Test Report: Docker_Linux_crio 22047

4655c6aa5049635fb4cb98fc0f74f66a1c57dbdb:2025-12-06:42658

Failed tests (27/415)

TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable volcano --alsologtostderr -v=1: exit status 11 (252.409259ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:12.688144  512542 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:12.688310  512542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:12.688320  512542 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:12.688325  512542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:12.688547  512542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:12.688821  512542 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:12.689131  512542 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:12.689161  512542 addons.go:622] checking whether the cluster is paused
	I1206 09:14:12.689241  512542 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:12.689258  512542 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:12.689626  512542 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:12.707508  512542 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:12.707570  512542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:12.725979  512542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:12.817994  512542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:12.818086  512542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:12.847782  512542 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:12.847803  512542 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:12.847807  512542 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:12.847813  512542 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:12.847818  512542 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:12.847822  512542 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:12.847828  512542 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:12.847832  512542 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:12.847837  512542 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:12.847858  512542 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:12.847865  512542 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:12.847868  512542 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:12.847871  512542 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:12.847874  512542 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:12.847877  512542 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:12.847881  512542 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:12.847887  512542 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:12.847893  512542 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:12.847896  512542 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:12.847899  512542 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:12.847904  512542 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:12.847912  512542 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:12.847918  512542 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:12.847920  512542 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:12.847923  512542 cri.go:89] found id: ""
	I1206 09:14:12.847961  512542 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:12.862453  512542 out.go:203] 
	W1206 09:14:12.863636  512542 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:12.863652  512542 out.go:285] * 
	* 
	W1206 09:14:12.868114  512542 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:12.869573  512542 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
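Note (editor): this failure, and the Registry and RegistryCreds failures below, all exit with status 11 (MK_ADDON_DISABLE_PAUSED) for the same reason. Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system container IDs with crictl and then runs `sudo runc list -f json`, and on this crio node that last step fails with `open /run/runc: no such file or directory` (runc has no state directory here, plausibly because crio is configured with a different runtime or runtime root). The sketch below reproduces the observed sequence; it is illustrative only, and `runOnNode` is a hypothetical stand-in for minikube's ssh_runner, not the real implementation in cri.go/addons.go.

```go
// Sketch only: mirrors the failing check sequence visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOnNode is a hypothetical stand-in for minikube's ssh_runner; for
// illustration it just runs the command in a local shell.
func runOnNode(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return string(out), err
}

// kubeSystemPaused mirrors the log: list kube-system container IDs via
// crictl, then ask runc which containers are paused.
func kubeSystemPaused() (bool, error) {
	ids, err := runOnNode(`sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`)
	if err != nil {
		return false, fmt.Errorf("list containers: %w", err)
	}
	// This is the step that fails in the report: with no /run/runc state
	// directory, runc exits 1 and the whole disable command aborts.
	out, err := runOnNode(`sudo runc list -f json`)
	if err != nil {
		return false, fmt.Errorf("check paused: list paused: runc: %w", err)
	}
	for _, id := range strings.Fields(ids) {
		if strings.Contains(out, id) && strings.Contains(out, `"status": "paused"`) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	paused, err := kubeSystemPaused()
	fmt.Println("paused:", paused, "err:", err)
}
```

If this is the cause, running `sudo runc list` by hand on the node (or checking which runtime crio is actually using) should show the same error even though nothing is paused.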

TestAddons/parallel/Registry (15.41s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.449087ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002687012s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004105233s
addons_test.go:392: (dbg) Run:  kubectl --context addons-101630 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-101630 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-101630 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.956423079s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 ip
2025/12/06 09:14:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable registry --alsologtostderr -v=1: exit status 11 (240.689825ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:37.944201  514513 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:37.944539  514513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:37.944551  514513 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:37.944556  514513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:37.944758  514513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:37.945021  514513 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:37.945383  514513 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:37.945407  514513 addons.go:622] checking whether the cluster is paused
	I1206 09:14:37.945507  514513 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:37.945527  514513 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:37.945901  514513 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:37.963355  514513 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:37.963410  514513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:37.980525  514513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:38.072603  514513 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:38.072701  514513 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:38.102998  514513 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:38.103030  514513 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:38.103034  514513 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:38.103037  514513 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:38.103040  514513 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:38.103045  514513 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:38.103048  514513 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:38.103051  514513 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:38.103054  514513 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:38.103064  514513 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:38.103067  514513 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:38.103069  514513 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:38.103072  514513 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:38.103075  514513 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:38.103078  514513 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:38.103090  514513 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:38.103098  514513 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:38.103102  514513 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:38.103105  514513 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:38.103107  514513 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:38.103110  514513 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:38.103113  514513 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:38.103115  514513 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:38.103118  514513 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:38.103121  514513 cri.go:89] found id: ""
	I1206 09:14:38.103171  514513 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:38.117732  514513 out.go:203] 
	W1206 09:14:38.118833  514513 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:38.118858  514513 out.go:285] * 
	* 
	W1206 09:14:38.121856  514513 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:38.122952  514513 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.41s)

TestAddons/parallel/RegistryCreds (0.43s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.038363ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-101630
addons_test.go:332: (dbg) Run:  kubectl --context addons-101630 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (255.497859ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:40.315379  515266 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:40.315722  515266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:40.315733  515266 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:40.315740  515266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:40.315953  515266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:40.316275  515266 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:40.316669  515266 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:40.316699  515266 addons.go:622] checking whether the cluster is paused
	I1206 09:14:40.316809  515266 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:40.316835  515266 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:40.317284  515266 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:40.335125  515266 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:40.335182  515266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:40.353025  515266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:40.447825  515266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:40.447911  515266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:40.482986  515266 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:40.483007  515266 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:40.483011  515266 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:40.483014  515266 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:40.483017  515266 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:40.483021  515266 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:40.483023  515266 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:40.483026  515266 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:40.483028  515266 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:40.483034  515266 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:40.483037  515266 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:40.483039  515266 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:40.483042  515266 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:40.483044  515266 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:40.483047  515266 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:40.483052  515266 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:40.483055  515266 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:40.483058  515266 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:40.483061  515266 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:40.483064  515266 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:40.483068  515266 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:40.483070  515266 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:40.483073  515266 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:40.483075  515266 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:40.483078  515266 cri.go:89] found id: ""
	I1206 09:14:40.483116  515266 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:40.499267  515266 out.go:203] 
	W1206 09:14:40.500504  515266 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:40.500532  515266 out.go:285] * 
	* 
	W1206 09:14:40.504578  515266 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:40.505974  515266 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.43s)

TestAddons/parallel/Ingress (146.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-101630 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-101630 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-101630 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f731f3e8-a314-4880-9785-b066d8c00f18] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f731f3e8-a314-4880-9785-b066d8c00f18] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00399098s
I1206 09:14:48.557092  502867 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.202435622s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-101630 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
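Note (editor): the `ssh: Process exited with status 28` above matches curl's timeout error (CURLE_OPERATION_TIMEDOUT), so the request to 127.0.0.1 with `Host: nginx.example.com` inside the node got no answer within the roughly 2m13s window; that points at the ingress controller not routing, rather than at ssh itself. A minimal Go equivalent of that probe, assuming it runs where the ingress endpoint is reachable (the test targets http://127.0.0.1/ from inside the node):

```go
// Minimal probe equivalent to the failing step: GET the ingress endpoint
// with the Host header the nginx ingress rule matches on.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // routes to the nginx test service

	client := &http.Client{Timeout: 10 * time.Second} // fail fast instead of hanging for minutes
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("no response from ingress:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s (%d bytes)\n", resp.Status, len(body))
}
```

The short client timeout is a deliberate choice for a diagnostic probe: a non-routing ingress then surfaces in seconds rather than after the multi-minute hang seen in this run.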
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-101630
helpers_test.go:243: (dbg) docker inspect addons-101630:

-- stdout --
	[
	    {
	        "Id": "6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95",
	        "Created": "2025-12-06T09:12:51.478087231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505345,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:12:51.506945744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95/hostname",
	        "HostsPath": "/var/lib/docker/containers/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95/hosts",
	        "LogPath": "/var/lib/docker/containers/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95-json.log",
	        "Name": "/addons-101630",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-101630:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-101630",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95",
	                "LowerDir": "/var/lib/docker/overlay2/56e13216f6e4cfd65f1c4013d5539855c950bb7b703c820415e90216deee444d-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56e13216f6e4cfd65f1c4013d5539855c950bb7b703c820415e90216deee444d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56e13216f6e4cfd65f1c4013d5539855c950bb7b703c820415e90216deee444d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56e13216f6e4cfd65f1c4013d5539855c950bb7b703c820415e90216deee444d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-101630",
	                "Source": "/var/lib/docker/volumes/addons-101630/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-101630",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-101630",
	                "name.minikube.sigs.k8s.io": "addons-101630",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f992c1691876af5862657fbfe223814bf969fca236e2a4ad9a4022552816a151",
	            "SandboxKey": "/var/run/docker/netns/f992c1691876",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-101630": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7391d62a417b1088b41fe0868bc0021bc08c635885cb16409110efe92f7d10e1",
	                    "EndpointID": "bd713ef4fdcd52ffc6c64940a9b75b3eb8f70925f5a2a1d69e8b592980946ac4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "86:20:0b:7b:63:cd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-101630",
	                        "6796bdef6f09"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
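Note (editor): the SSH endpoint used throughout this report (127.0.0.1:32888) comes from the `NetworkSettings.Ports` block in the inspect output above; minikube extracts it with the Go template visible in the earlier `cli_runner` lines. For reference, a small sketch that pulls the same mapping out of `docker inspect` JSON, assuming `docker` is on PATH and the profile is named addons-101630:

```go
// Extract the host port mapped to the container's 22/tcp from `docker inspect`,
// the same mapping minikube reads with its Go template in the logs above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containers mirrors just the fields we need from the inspect output.
type containers []struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-101630").Output()
	if err != nil {
		panic(err)
	}
	var info containers
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	if len(info) == 0 {
		panic("no such container")
	}
	// Same lookup as {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
	for _, b := range info[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh via %s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:32888
	}
}
```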
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-101630 -n addons-101630
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-101630 logs -n 25: (1.139943035s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-279469 --alsologtostderr --binary-mirror http://127.0.0.1:43659 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-279469 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	│ delete  │ -p binary-mirror-279469                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-279469 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ addons  │ disable dashboard -p addons-101630                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	│ addons  │ enable dashboard -p addons-101630                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	│ start   │ -p addons-101630 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ addons-101630 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ enable headlamp -p addons-101630 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ ip      │ addons-101630 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ addons-101630 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ ssh     │ addons-101630 ssh cat /opt/local-path-provisioner/pvc-20d4bb10-c0ec-46a8-962a-05dd97216bc2_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ addons-101630 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-101630                                                                                                                                                                                                                                                                                                                                                                                           │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ addons-101630 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ ssh     │ addons-101630 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-101630 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ addons  │ addons-101630 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ ip      │ addons-101630 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-101630        │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:12:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
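	These entries use the standard glog/klog header, so a line such as "I1206 09:12:31.412896  504702 out.go:360] ..." decodes as severity I(nfo), month 12, day 06, wall-clock time with microseconds, thread id 504702, and the source file and line that emitted the message. A minimal Go sketch of parsing that header (the regexp and field names are illustrative, not minikube code):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogHeader matches the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] prefix
    // described by the "Log line format" line above.
    var klogHeader = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

    func main() {
    	line := "I1206 09:12:31.412896  504702 out.go:360] Setting OutFile to fd 1 ..."
    	m := klogHeader.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s month=%s day=%s time=%s tid=%s file=%s line=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
    }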
	I1206 09:12:31.412896  504702 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:12:31.413136  504702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:31.413144  504702 out.go:374] Setting ErrFile to fd 2...
	I1206 09:12:31.413148  504702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:31.413324  504702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:12:31.413876  504702 out.go:368] Setting JSON to false
	I1206 09:12:31.414760  504702 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6895,"bootTime":1765005456,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:12:31.414816  504702 start.go:143] virtualization: kvm guest
	I1206 09:12:31.416602  504702 out.go:179] * [addons-101630] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:12:31.417703  504702 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:12:31.417751  504702 notify.go:221] Checking for updates...
	I1206 09:12:31.419889  504702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:12:31.421039  504702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:12:31.422198  504702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:12:31.423326  504702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:12:31.424412  504702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:12:31.425751  504702 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:12:31.449324  504702 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:12:31.449422  504702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:12:31.501556  504702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-06 09:12:31.492180721 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:12:31.501694  504702 docker.go:319] overlay module found
	I1206 09:12:31.503324  504702 out.go:179] * Using the docker driver based on user configuration
	I1206 09:12:31.504329  504702 start.go:309] selected driver: docker
	I1206 09:12:31.504341  504702 start.go:927] validating driver "docker" against <nil>
	I1206 09:12:31.504351  504702 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:12:31.504933  504702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:12:31.557998  504702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-06 09:12:31.547832527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:12:31.558171  504702 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:12:31.558404  504702 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:12:31.560074  504702 out.go:179] * Using Docker driver with root privileges
	I1206 09:12:31.561159  504702 cni.go:84] Creating CNI manager for ""
	I1206 09:12:31.561227  504702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:12:31.561238  504702 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:12:31.561313  504702 start.go:353] cluster config:
	{Name:addons-101630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:31.562599  504702 out.go:179] * Starting "addons-101630" primary control-plane node in "addons-101630" cluster
	I1206 09:12:31.563642  504702 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:12:31.564904  504702 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:12:31.566068  504702 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:31.566098  504702 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:12:31.566109  504702 cache.go:65] Caching tarball of preloaded images
	I1206 09:12:31.566172  504702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:12:31.566215  504702 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:12:31.566232  504702 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:12:31.566557  504702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/config.json ...
	I1206 09:12:31.566585  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/config.json: {Name:mk7ed1e2c38d36040bf6a585683d05bd81f4d33c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:31.582888  504702 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 09:12:31.583014  504702 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 09:12:31.583031  504702 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 09:12:31.583036  504702 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 09:12:31.583043  504702 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 09:12:31.583051  504702 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1206 09:12:44.734872  504702 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1206 09:12:44.734914  504702 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:12:44.734962  504702 start.go:360] acquireMachinesLock for addons-101630: {Name:mk1e28ced48dde6057c3e722484e184aa9b7e960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:12:44.735065  504702 start.go:364] duration metric: took 80.862µs to acquireMachinesLock for "addons-101630"
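	start.go acquires a per-profile machines lock before provisioning, with the 500ms retry delay and 10m timeout shown in the lock options above. A rough sketch of that retry-until-timeout pattern using an exclusive lock file (the path and helper below are hypothetical, not minikube's actual lock implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire polls for an exclusive lock file, retrying every delay until timeout.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out waiting for lock " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held; provisioning can proceed")
    }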
	I1206 09:12:44.735088  504702 start.go:93] Provisioning new machine with config: &{Name:addons-101630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:12:44.735166  504702 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:12:44.736766  504702 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 09:12:44.736992  504702 start.go:159] libmachine.API.Create for "addons-101630" (driver="docker")
	I1206 09:12:44.737031  504702 client.go:173] LocalClient.Create starting
	I1206 09:12:44.737171  504702 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:12:44.836295  504702 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:12:44.970292  504702 cli_runner.go:164] Run: docker network inspect addons-101630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:12:44.986888  504702 cli_runner.go:211] docker network inspect addons-101630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:12:44.986970  504702 network_create.go:284] running [docker network inspect addons-101630] to gather additional debugging logs...
	I1206 09:12:44.986989  504702 cli_runner.go:164] Run: docker network inspect addons-101630
	W1206 09:12:45.002121  504702 cli_runner.go:211] docker network inspect addons-101630 returned with exit code 1
	I1206 09:12:45.002152  504702 network_create.go:287] error running [docker network inspect addons-101630]: docker network inspect addons-101630: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-101630 not found
	I1206 09:12:45.002165  504702 network_create.go:289] output of [docker network inspect addons-101630]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-101630 not found
	
	** /stderr **
	I1206 09:12:45.002281  504702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:12:45.018948  504702 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00163d350}
	I1206 09:12:45.018998  504702 network_create.go:124] attempt to create docker network addons-101630 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 09:12:45.019069  504702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-101630 addons-101630
	I1206 09:12:45.064974  504702 network_create.go:108] docker network addons-101630 192.168.49.0/24 created
	I1206 09:12:45.065004  504702 kic.go:121] calculated static IP "192.168.49.2" for the "addons-101630" container
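	The static IP follows directly from the subnet chosen above: with 192.168.49.0/24 and gateway 192.168.49.1, the node gets the next address, 192.168.49.2. A small Go sketch of that derivation (the base+2 rule is an assumption matching the values in this log, not necessarily kic.go's exact logic):

    package main

    import (
    	"fmt"
    	"net"
    )

    // nodeIP returns the first usable address after the gateway for a /24-style
    // subnet: base+1 is the gateway, base+2 the node.
    func nodeIP(cidr string) (net.IP, error) {
    	_, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	ip := ipnet.IP.To4()
    	return net.IPv4(ip[0], ip[1], ip[2], ip[3]+2), nil
    }

    func main() {
    	ip, err := nodeIP("192.168.49.0/24")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip) // 192.168.49.2
    }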
	I1206 09:12:45.065074  504702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:12:45.081284  504702 cli_runner.go:164] Run: docker volume create addons-101630 --label name.minikube.sigs.k8s.io=addons-101630 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:12:45.099590  504702 oci.go:103] Successfully created a docker volume addons-101630
	I1206 09:12:45.099686  504702 cli_runner.go:164] Run: docker run --rm --name addons-101630-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101630 --entrypoint /usr/bin/test -v addons-101630:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:12:47.640995  504702 cli_runner.go:217] Completed: docker run --rm --name addons-101630-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101630 --entrypoint /usr/bin/test -v addons-101630:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (2.541253677s)
	I1206 09:12:47.641031  504702 oci.go:107] Successfully prepared a docker volume addons-101630
	I1206 09:12:47.641064  504702 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:47.641074  504702 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:12:47.641140  504702 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-101630:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:12:51.407258  504702 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-101630:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.766062108s)
	I1206 09:12:51.407294  504702 kic.go:203] duration metric: took 3.766216145s to extract preloaded images to volume ...
	W1206 09:12:51.407395  504702 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:12:51.407437  504702 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:12:51.407505  504702 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:12:51.462437  504702 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-101630 --name addons-101630 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101630 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-101630 --network addons-101630 --ip 192.168.49.2 --volume addons-101630:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:12:51.718783  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Running}}
	I1206 09:12:51.737651  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:12:51.755634  504702 cli_runner.go:164] Run: docker exec addons-101630 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:12:51.799606  504702 oci.go:144] the created container "addons-101630" has a running status.
	I1206 09:12:51.799642  504702 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa...
	I1206 09:12:51.893128  504702 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:12:51.917550  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:12:51.938728  504702 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:12:51.938750  504702 kic_runner.go:114] Args: [docker exec --privileged addons-101630 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:12:51.984816  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:12:52.009096  504702 machine.go:94] provisionDockerMachine start ...
	I1206 09:12:52.009231  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.033169  504702 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:52.033546  504702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1206 09:12:52.033574  504702 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:12:52.169514  504702 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-101630
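	All provisioning from here on runs over SSH to the container's published port, 127.0.0.1:32888, as user "docker" with the generated id_rsa key. A minimal native Go SSH client performing the same hostname check, assuming golang.org/x/crypto/ssh (key path taken from the log):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32888", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.Output("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out) // addons-101630
    }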
	
	I1206 09:12:52.169546  504702 ubuntu.go:182] provisioning hostname "addons-101630"
	I1206 09:12:52.169608  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.188919  504702 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:52.189194  504702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1206 09:12:52.189210  504702 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-101630 && echo "addons-101630" | sudo tee /etc/hostname
	I1206 09:12:52.328676  504702 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-101630
	
	I1206 09:12:52.328759  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.347583  504702 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:52.347901  504702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1206 09:12:52.347932  504702 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101630/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:12:52.474936  504702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:12:52.474967  504702 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:12:52.475024  504702 ubuntu.go:190] setting up certificates
	I1206 09:12:52.475037  504702 provision.go:84] configureAuth start
	I1206 09:12:52.475103  504702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101630
	I1206 09:12:52.492559  504702 provision.go:143] copyHostCerts
	I1206 09:12:52.492632  504702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:12:52.492740  504702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:12:52.492803  504702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:12:52.492890  504702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.addons-101630 san=[127.0.0.1 192.168.49.2 addons-101630 localhost minikube]
	I1206 09:12:52.608060  504702 provision.go:177] copyRemoteCerts
	I1206 09:12:52.608127  504702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:12:52.608167  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.624952  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:52.717390  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:12:52.735583  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:12:52.752214  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:12:52.768659  504702 provision.go:87] duration metric: took 293.605715ms to configureAuth
	I1206 09:12:52.768686  504702 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:12:52.768874  504702 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:12:52.768990  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.786106  504702 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:52.786319  504702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1206 09:12:52.786337  504702 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:12:53.052424  504702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:12:53.052451  504702 machine.go:97] duration metric: took 1.043320212s to provisionDockerMachine
	I1206 09:12:53.052482  504702 client.go:176] duration metric: took 8.315442728s to LocalClient.Create
	I1206 09:12:53.052509  504702 start.go:167] duration metric: took 8.315519103s to libmachine.API.Create "addons-101630"
	I1206 09:12:53.052517  504702 start.go:293] postStartSetup for "addons-101630" (driver="docker")
	I1206 09:12:53.052527  504702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:12:53.052588  504702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:12:53.052630  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:53.069709  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:53.162795  504702 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:12:53.166097  504702 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:12:53.166127  504702 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:12:53.166139  504702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:12:53.166204  504702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:12:53.166239  504702 start.go:296] duration metric: took 113.714771ms for postStartSetup
	I1206 09:12:53.166580  504702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101630
	I1206 09:12:53.183477  504702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/config.json ...
	I1206 09:12:53.183744  504702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:12:53.183803  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:53.201048  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:53.290146  504702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:12:53.294581  504702 start.go:128] duration metric: took 8.559401152s to createHost
	I1206 09:12:53.294605  504702 start.go:83] releasing machines lock for "addons-101630", held for 8.559527188s
	I1206 09:12:53.294684  504702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101630
	I1206 09:12:53.311960  504702 ssh_runner.go:195] Run: cat /version.json
	I1206 09:12:53.312014  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:53.312040  504702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:12:53.312123  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:53.330325  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:53.331079  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:53.474551  504702 ssh_runner.go:195] Run: systemctl --version
	I1206 09:12:53.481127  504702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:12:53.516588  504702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:12:53.521234  504702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:12:53.521321  504702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:12:53.545062  504702 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:12:53.545091  504702 start.go:496] detecting cgroup driver to use...
	I1206 09:12:53.545123  504702 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:12:53.545172  504702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:12:53.561187  504702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:12:53.572672  504702 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:12:53.572722  504702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:12:53.587854  504702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:12:53.603952  504702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:12:53.684401  504702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:12:53.767529  504702 docker.go:234] disabling docker service ...
	I1206 09:12:53.767598  504702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:12:53.786175  504702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:12:53.797951  504702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:12:53.879093  504702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:12:53.958283  504702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:12:53.970090  504702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:12:53.983283  504702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:12:53.983343  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:53.993102  504702 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:12:53.993160  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.001293  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.009243  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.017332  504702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:12:54.024821  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.032774  504702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.045285  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
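	Taken together, the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to systemd, and pin conmon to the "pod" cgroup. A sketch of the same substitutions applied to an in-memory copy of the file (the sample input is illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "system.slice"
    `
    	// Mirror the sed edits from the log: pin the pause image, switch the
    	// cgroup manager to systemd, and reset conmon_cgroup to "pod".
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
    		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }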
	I1206 09:12:54.053442  504702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:12:54.060162  504702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:12:54.066790  504702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:12:54.142493  504702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:12:54.275826  504702 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:12:54.275916  504702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:12:54.279851  504702 start.go:564] Will wait 60s for crictl version
	I1206 09:12:54.279901  504702 ssh_runner.go:195] Run: which crictl
	I1206 09:12:54.283250  504702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:12:54.308664  504702 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:12:54.308754  504702 ssh_runner.go:195] Run: crio --version
	I1206 09:12:54.335731  504702 ssh_runner.go:195] Run: crio --version
	I1206 09:12:54.364788  504702 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:12:54.365752  504702 cli_runner.go:164] Run: docker network inspect addons-101630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:12:54.382630  504702 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 09:12:54.386836  504702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
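	Note the update pattern above: filter the old entry into a temp file, then cp the result back over /etc/hosts rather than editing in place. This is likely because /etc/hosts inside a container is a bind mount, so rename-based edits such as sed -i would fail. A sketch of the same idempotent rewrite as a pure function (names are illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHost drops any existing line for name and appends ip<TAB>name,
    // mirroring the grep -v / echo / cp pattern in the log.
    func ensureHost(hosts, ip, name string) string {
    	var kept []string
    	for _, l := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(l, "\t"+name) {
    			kept = append(kept, l)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
    	fmt.Print(ensureHost(hosts, "192.168.49.1", "host.minikube.internal"))
    }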
	I1206 09:12:54.397230  504702 kubeadm.go:884] updating cluster {Name:addons-101630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:12:54.397381  504702 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:54.397436  504702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:12:54.430480  504702 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:12:54.430509  504702 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:12:54.430558  504702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:12:54.456213  504702 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:12:54.456238  504702 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:12:54.456247  504702 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1206 09:12:54.456345  504702 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-101630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:12:54.456410  504702 ssh_runner.go:195] Run: crio config
	I1206 09:12:54.501101  504702 cni.go:84] Creating CNI manager for ""
	I1206 09:12:54.501129  504702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:12:54.501152  504702 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:12:54.501174  504702 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101630 NodeName:addons-101630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:12:54.501311  504702 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-101630"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
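	This generated configuration is what gets written out to the node a few lines below (kubeadm.yaml.new, 2209 bytes). A quick sketch that round-trips the KubeletConfiguration fragment to confirm the cgroup driver matches the systemd driver detected earlier, assuming gopkg.in/yaml.v3:

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    const kubeletFragment = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    failSwapOn: false
    `

    type kubeletConfig struct {
    	Kind                     string `yaml:"kind"`
    	CgroupDriver             string `yaml:"cgroupDriver"`
    	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    	FailSwapOn               bool   `yaml:"failSwapOn"`
    }

    func main() {
    	var c kubeletConfig
    	if err := yaml.Unmarshal([]byte(kubeletFragment), &c); err != nil {
    		panic(err)
    	}
    	// The kubelet and CRI-O must agree on the systemd cgroup driver,
    	// matching the "detected systemd cgroup driver" line earlier in the log.
    	fmt.Printf("%s: cgroupDriver=%s endpoint=%s\n", c.Kind, c.CgroupDriver, c.ContainerRuntimeEndpoint)
    }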
	
	I1206 09:12:54.501372  504702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:12:54.509605  504702 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:12:54.509671  504702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:12:54.517231  504702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 09:12:54.529998  504702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:12:54.544312  504702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1206 09:12:54.556296  504702 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:12:54.559665  504702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:12:54.569007  504702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:12:54.645931  504702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:12:54.670619  504702 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630 for IP: 192.168.49.2
	I1206 09:12:54.670639  504702 certs.go:195] generating shared ca certs ...
	I1206 09:12:54.670663  504702 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.670795  504702 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:12:54.777420  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt ...
	I1206 09:12:54.777479  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt: {Name:mk4dab107adc72fe9ab137d87913311c42622b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.777711  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key ...
	I1206 09:12:54.777731  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key: {Name:mk373e3e449365234022f0260849cb6b80917be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.777876  504702 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:12:54.814495  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt ...
	I1206 09:12:54.814523  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt: {Name:mkc75d10b349bbd61defbab7a134a0ca10cef764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.814710  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key ...
	I1206 09:12:54.814729  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key: {Name:mkdc7ddf757b3685a0de21cbad18972d9eca2094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.814836  504702 certs.go:257] generating profile certs ...
	I1206 09:12:54.814918  504702 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.key
	I1206 09:12:54.814934  504702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt with IP's: []
	I1206 09:12:54.885246  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt ...
	I1206 09:12:54.885274  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: {Name:mke3a8fc1995c7e2da3188a157968d5258718f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.885483  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.key ...
	I1206 09:12:54.885501  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.key: {Name:mk5ad30806708ae35935900aaf4453acdeb14b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.885613  504702 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key.3009eb1e
	I1206 09:12:54.885643  504702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt.3009eb1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 09:12:54.912517  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt.3009eb1e ...
	I1206 09:12:54.912535  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt.3009eb1e: {Name:mk688dbdf5b8d5eb5a8b3085973f549913e01b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.912673  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key.3009eb1e ...
	I1206 09:12:54.912692  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key.3009eb1e: {Name:mk8aa45ee08202d8a600fdd610b95296932f1d41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.912786  504702 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt.3009eb1e -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt
	I1206 09:12:54.912888  504702 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key.3009eb1e -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key
	I1206 09:12:54.912954  504702 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.key
	I1206 09:12:54.912979  504702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.crt with IP's: []
	I1206 09:12:54.950759  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.crt ...
	I1206 09:12:54.950781  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.crt: {Name:mk0ddbafc232fd30924fff603ec46c9e12bac8e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.950936  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.key ...
	I1206 09:12:54.950958  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.key: {Name:mkdc8387c784ad7beba3e2538592178e38b98aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.951184  504702 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:12:54.951230  504702 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:12:54.951268  504702 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:12:54.951299  504702 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
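
The certs.go/crypto.go lines above cover minikube's first-start PKI bootstrap: generate the "minikubeCA" and "proxyClientCA" roots, then sign the per-profile client ("minikube-user"), apiserver, and aggregator certs against them. Below is a minimal sketch of the root-CA step using only the Go standard library; the key size, subject, and ten-year lifetime are illustrative assumptions, not minikube's exact parameters.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    // Sketch of a self-signed CA like the "minikubeCA" generated above.
    // Parameters (2048-bit RSA, 10-year lifetime) are assumptions.
    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: the template is both subject and issuer.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        crt := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        if err := os.WriteFile("ca.crt", crt, 0o644); err != nil {
            panic(err)
        }
        if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
            panic(err)
        }
    }
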
	I1206 09:12:54.951913  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:12:54.970185  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:12:54.986977  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:12:55.003430  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:12:55.019916  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:12:55.036098  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:12:55.052242  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:12:55.068424  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:12:55.085700  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:12:55.104506  504702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:12:55.116431  504702 ssh_runner.go:195] Run: openssl version
	I1206 09:12:55.122500  504702 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:55.129436  504702 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:12:55.138771  504702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:55.142306  504702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:55.142351  504702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:55.176946  504702 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:12:55.184844  504702 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
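
The openssl x509 -hash -noout call above derives the short subject hash (b5213941 here) that OpenSSL uses to look up CAs in /etc/ssl/certs, and the following ln -fs makes the trust store resolve that hash to minikubeCA.pem. A small Go sketch of the same two steps follows; it is a hypothetical helper, assumes openssl is on PATH, and needs root to write under /etc/ssl/certs.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        // Equivalent of: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mimic ln -fs (force)
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }
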
	I1206 09:12:55.192387  504702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:12:55.195921  504702 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:12:55.195966  504702 kubeadm.go:401] StartCluster: {Name:addons-101630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:55.196039  504702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:12:55.196087  504702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:12:55.224224  504702 cri.go:89] found id: ""
	I1206 09:12:55.224308  504702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:12:55.232407  504702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:12:55.240030  504702 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:12:55.240075  504702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:12:55.247377  504702 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:12:55.247391  504702 kubeadm.go:158] found existing configuration files:
	
	I1206 09:12:55.247439  504702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:12:55.254725  504702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:12:55.254776  504702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:12:55.261873  504702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:12:55.269053  504702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:12:55.269105  504702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:12:55.276035  504702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:12:55.283210  504702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:12:55.283265  504702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:12:55.290280  504702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:12:55.297367  504702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:12:55.297417  504702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
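
The four grep/rm pairs above are kubeadm.go's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; a missing file (grep exit status 2, as here on first start) or a non-matching file is removed before kubeadm init runs. An illustrative Go sketch of that loop follows; the helper is hypothetical, while the paths and endpoint are taken from the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file is missing;
            // either way the config cannot be reused, so remove it.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
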
	I1206 09:12:55.304189  504702 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:12:55.340887  504702 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:12:55.340964  504702 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:12:55.372674  504702 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:12:55.372760  504702 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:12:55.372821  504702 kubeadm.go:319] OS: Linux
	I1206 09:12:55.372890  504702 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:12:55.372963  504702 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:12:55.373036  504702 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:12:55.373086  504702 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:12:55.373124  504702 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:12:55.373183  504702 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:12:55.373232  504702 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:12:55.373306  504702 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:12:55.431857  504702 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:12:55.432002  504702 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:12:55.432117  504702 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:12:55.438736  504702 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:12:55.440567  504702 out.go:252]   - Generating certificates and keys ...
	I1206 09:12:55.440672  504702 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:12:55.440725  504702 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:12:55.890546  504702 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:12:56.213485  504702 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:12:56.826407  504702 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:12:56.977161  504702 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:12:57.344352  504702 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:12:57.344519  504702 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-101630 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 09:12:57.517791  504702 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:12:57.517956  504702 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-101630 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 09:12:57.610101  504702 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:12:57.788348  504702 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:12:58.119946  504702 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:12:58.120032  504702 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:12:58.529052  504702 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:12:58.595001  504702 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:12:58.782981  504702 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:12:58.853000  504702 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:12:58.948208  504702 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:12:58.948661  504702 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:12:58.952171  504702 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:12:58.953353  504702 out.go:252]   - Booting up control plane ...
	I1206 09:12:58.953481  504702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:12:58.953582  504702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:12:58.954134  504702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:12:58.981815  504702 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:12:58.981942  504702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:12:58.988145  504702 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:12:58.988352  504702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:12:58.988402  504702 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:12:59.083725  504702 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:12:59.083873  504702 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:12:59.584669  504702 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.263368ms
	I1206 09:12:59.588590  504702 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:12:59.588725  504702 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 09:12:59.588855  504702 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:12:59.588920  504702 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:13:01.368519  504702 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.779846247s
	I1206 09:13:01.832111  504702 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.243451665s
	I1206 09:13:03.589987  504702 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001382803s
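
The control-plane-check block above polls one health endpoint per component (apiserver /livez on 192.168.49.2:8443, controller-manager /healthz on 127.0.0.1:10257, scheduler /livez on 127.0.0.1:10259) within a 4m0s budget. A minimal polling sketch under those assumptions is below; InsecureSkipVerify is used only to keep the example self-contained, whereas the real check trusts the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        endpoints := map[string]string{
            "kube-apiserver":          "https://192.168.49.2:8443/livez",
            "kube-controller-manager": "https://127.0.0.1:10257/healthz",
            "kube-scheduler":          "https://127.0.0.1:10259/livez",
        }
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for name, url := range endpoints {
            for time.Now().Before(deadline) {
                resp, err := client.Get(url)
                if err == nil && resp.StatusCode == http.StatusOK {
                    resp.Body.Close()
                    fmt.Println(name, "is healthy")
                    break
                }
                if resp != nil {
                    resp.Body.Close()
                }
                time.Sleep(500 * time.Millisecond)
            }
        }
    }
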
	I1206 09:13:03.605042  504702 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:13:03.614518  504702 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:13:03.622221  504702 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:13:03.622538  504702 kubeadm.go:319] [mark-control-plane] Marking the node addons-101630 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:13:03.629956  504702 kubeadm.go:319] [bootstrap-token] Using token: umpzxk.mrukc1mpg3pqm1t5
	I1206 09:13:03.631078  504702 out.go:252]   - Configuring RBAC rules ...
	I1206 09:13:03.631220  504702 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:13:03.635626  504702 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:13:03.640299  504702 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:13:03.642599  504702 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:13:03.644711  504702 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:13:03.647520  504702 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:13:03.995904  504702 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:13:04.411229  504702 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:13:04.995765  504702 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:13:04.996613  504702 kubeadm.go:319] 
	I1206 09:13:04.996683  504702 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:13:04.996698  504702 kubeadm.go:319] 
	I1206 09:13:04.996770  504702 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:13:04.996775  504702 kubeadm.go:319] 
	I1206 09:13:04.996796  504702 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:13:04.996847  504702 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:13:04.996919  504702 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:13:04.996938  504702 kubeadm.go:319] 
	I1206 09:13:04.997022  504702 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:13:04.997033  504702 kubeadm.go:319] 
	I1206 09:13:04.997125  504702 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:13:04.997143  504702 kubeadm.go:319] 
	I1206 09:13:04.997198  504702 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:13:04.997282  504702 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:13:04.997350  504702 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:13:04.997359  504702 kubeadm.go:319] 
	I1206 09:13:04.997429  504702 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:13:04.997538  504702 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:13:04.997547  504702 kubeadm.go:319] 
	I1206 09:13:04.997612  504702 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token umpzxk.mrukc1mpg3pqm1t5 \
	I1206 09:13:04.997698  504702 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:13:04.997716  504702 kubeadm.go:319] 	--control-plane 
	I1206 09:13:04.997720  504702 kubeadm.go:319] 
	I1206 09:13:04.997796  504702 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:13:04.997803  504702 kubeadm.go:319] 
	I1206 09:13:04.997904  504702 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token umpzxk.mrukc1mpg3pqm1t5 \
	I1206 09:13:04.998027  504702 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
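
The --discovery-token-ca-cert-hash printed above is kubeadm's standard CA pin: a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. The sketch below recomputes it from ca.crt; the path comes from the certificateDir logged earlier, and reading it directly on the node is an assumption made to keep the example self-contained.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins SHA-256 of the SPKI, not of the whole certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
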
	I1206 09:13:05.000294  504702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:13:05.000421  504702 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:13:05.000443  504702 cni.go:84] Creating CNI manager for ""
	I1206 09:13:05.000452  504702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:13:05.001836  504702 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:13:05.002801  504702 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:13:05.007028  504702 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:13:05.007049  504702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:13:05.019904  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:13:05.225835  504702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:13:05.225961  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:05.225961  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-101630 minikube.k8s.io/updated_at=2025_12_06T09_13_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-101630 minikube.k8s.io/primary=true
	I1206 09:13:05.236182  504702 ops.go:34] apiserver oom_adj: -16
	I1206 09:13:05.320632  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:05.820993  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:06.321339  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:06.820777  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:07.321320  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:07.820998  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:08.320784  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:08.821366  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:08.885728  504702 kubeadm.go:1114] duration metric: took 3.659838522s to wait for elevateKubeSystemPrivileges
	I1206 09:13:08.885763  504702 kubeadm.go:403] duration metric: took 13.689800256s to StartCluster
	I1206 09:13:08.885780  504702 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:13:08.885882  504702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:13:08.886376  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:13:08.886604  504702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:13:08.886626  504702 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:13:08.886696  504702 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 09:13:08.886817  504702 addons.go:70] Setting default-storageclass=true in profile "addons-101630"
	I1206 09:13:08.886835  504702 addons.go:70] Setting yakd=true in profile "addons-101630"
	I1206 09:13:08.886857  504702 addons.go:239] Setting addon yakd=true in "addons-101630"
	I1206 09:13:08.886859  504702 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-101630"
	I1206 09:13:08.886877  504702 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:13:08.886891  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.886868  504702 addons.go:70] Setting cloud-spanner=true in profile "addons-101630"
	I1206 09:13:08.886914  504702 addons.go:239] Setting addon cloud-spanner=true in "addons-101630"
	I1206 09:13:08.886882  504702 addons.go:70] Setting metrics-server=true in profile "addons-101630"
	I1206 09:13:08.886930  504702 addons.go:70] Setting gcp-auth=true in profile "addons-101630"
	I1206 09:13:08.886937  504702 addons.go:70] Setting ingress-dns=true in profile "addons-101630"
	I1206 09:13:08.886948  504702 addons.go:239] Setting addon ingress-dns=true in "addons-101630"
	I1206 09:13:08.886954  504702 mustload.go:66] Loading cluster: addons-101630
	I1206 09:13:08.886957  504702 addons.go:239] Setting addon metrics-server=true in "addons-101630"
	I1206 09:13:08.886975  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.886975  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887008  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887099  504702 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-101630"
	I1206 09:13:08.887131  504702 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-101630"
	I1206 09:13:08.887169  504702 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-101630"
	I1206 09:13:08.887172  504702 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:13:08.887196  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887204  504702 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-101630"
	I1206 09:13:08.887240  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887324  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887440  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887517  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887534  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887548  504702 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-101630"
	I1206 09:13:08.887554  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887562  504702 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-101630"
	I1206 09:13:08.887585  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887721  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887764  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887534  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.888064  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.890103  504702 out.go:179] * Verifying Kubernetes components...
	I1206 09:13:08.888078  504702 addons.go:70] Setting registry=true in profile "addons-101630"
	I1206 09:13:08.890335  504702 addons.go:239] Setting addon registry=true in "addons-101630"
	I1206 09:13:08.890368  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.890962  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.888115  504702 addons.go:70] Setting registry-creds=true in profile "addons-101630"
	I1206 09:13:08.891197  504702 addons.go:239] Setting addon registry-creds=true in "addons-101630"
	I1206 09:13:08.891224  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.888123  504702 addons.go:70] Setting storage-provisioner=true in profile "addons-101630"
	I1206 09:13:08.891908  504702 addons.go:239] Setting addon storage-provisioner=true in "addons-101630"
	I1206 09:13:08.891944  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.892614  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.888132  504702 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-101630"
	I1206 09:13:08.895245  504702 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-101630"
	I1206 09:13:08.888139  504702 addons.go:70] Setting volcano=true in profile "addons-101630"
	I1206 09:13:08.895433  504702 addons.go:239] Setting addon volcano=true in "addons-101630"
	I1206 09:13:08.888150  504702 addons.go:70] Setting inspektor-gadget=true in profile "addons-101630"
	I1206 09:13:08.895650  504702 addons.go:239] Setting addon inspektor-gadget=true in "addons-101630"
	I1206 09:13:08.895681  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.888154  504702 addons.go:70] Setting volumesnapshots=true in profile "addons-101630"
	I1206 09:13:08.896018  504702 addons.go:239] Setting addon volumesnapshots=true in "addons-101630"
	I1206 09:13:08.896045  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.886922  504702 addons.go:70] Setting ingress=true in profile "addons-101630"
	I1206 09:13:08.896314  504702 addons.go:239] Setting addon ingress=true in "addons-101630"
	I1206 09:13:08.896367  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.905241  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.905767  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.905825  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.906184  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.907547  504702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:13:08.907707  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.908311  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.913581  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.953432  504702 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:13:08.953447  504702 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:13:08.953432  504702 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:13:08.955153  504702 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:13:08.955177  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:13:08.955271  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.955639  504702 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:13:08.955783  504702 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:13:08.955811  504702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:13:08.955933  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.957538  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:13:08.957712  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:13:08.957734  504702 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:13:08.957794  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.958339  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.958614  504702 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:13:08.958963  504702 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:13:08.958978  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:13:08.959033  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.961341  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:13:08.961397  504702 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:13:08.962406  504702 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:13:08.962426  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:13:08.962523  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:13:08.962590  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.964598  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:13:08.965866  504702 addons.go:239] Setting addon default-storageclass=true in "addons-101630"
	I1206 09:13:08.965910  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.966151  504702 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:13:08.966431  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.968440  504702 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:13:08.968481  504702 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:13:08.969670  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:13:08.969702  504702 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:13:08.969721  504702 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:13:08.969738  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:13:08.970788  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.971160  504702 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:13:08.971764  504702 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-101630"
	I1206 09:13:08.971809  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.972163  504702 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:13:08.972276  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.972292  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:13:08.972346  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.972703  504702 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:13:08.972717  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:13:08.972762  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.973571  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:13:08.974591  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:13:08.975485  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:13:08.975528  504702 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:13:08.976306  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:13:08.976339  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:13:08.976413  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.976799  504702 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:13:08.976813  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:13:08.976866  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.989091  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:13:08.989173  504702 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:13:08.990589  504702 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:13:08.990608  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:13:08.990669  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.990851  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:13:08.990863  504702 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:13:08.990920  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:09.008363  504702 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:13:09.012007  504702 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:13:09.012030  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:13:09.012100  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:09.022266  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.028061  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.031926  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.039729  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.041773  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	W1206 09:13:09.042286  504702 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 09:13:09.050891  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.055005  504702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:13:09.064960  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.069168  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.074621  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.077213  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.081285  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.084768  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.084867  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.084953  504702 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:13:09.085000  504702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:13:09.085061  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:09.086267  504702 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:13:09.087343  504702 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:13:09.088670  504702 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:13:09.088724  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:13:09.088788  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	W1206 09:13:09.095184  504702 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 09:13:09.095221  504702 retry.go:31] will retry after 188.294333ms: ssh: handshake failed: EOF
	I1206 09:13:09.118496  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	W1206 09:13:09.119603  504702 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 09:13:09.119844  504702 retry.go:31] will retry after 207.750917ms: ssh: handshake failed: EOF
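
The sshutil/retry pairs above show the dial-retry pattern in effect while the addon installers open many SSH sessions to port 32888 in parallel: a handshake EOF is logged as a warning and the dial is retried after a short randomized delay. A hypothetical sketch of such a retry loop follows; the attempt count, jitter range, and dial stub are assumptions, not minikube's exact policy.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // withRetry re-runs dial until it succeeds or attempts are exhausted,
    // sleeping a jittered interval between tries.
    func withRetry(attempts int, dial func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            wait := time.Duration(rand.Intn(200)+100) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        calls := 0
        err := withRetry(3, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
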
	I1206 09:13:09.126277  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.135920  504702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:13:09.185061  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:13:09.185083  504702 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:13:09.200572  504702 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:13:09.200742  504702 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:13:09.201157  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:13:09.205022  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:13:09.205044  504702 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:13:09.214538  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:13:09.218369  504702 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:13:09.218393  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:13:09.226049  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:13:09.226072  504702 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:13:09.230831  504702 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:13:09.230851  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:13:09.237965  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:13:09.242861  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:13:09.246358  504702 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:13:09.246380  504702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:13:09.252272  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:13:09.252295  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:13:09.252853  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:13:09.257150  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:13:09.258035  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:13:09.265195  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:13:09.265216  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:13:09.270734  504702 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:13:09.270761  504702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:13:09.276664  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:13:09.280635  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:13:09.291641  504702 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:13:09.291672  504702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:13:09.315741  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:13:09.322600  504702 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:13:09.322632  504702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:13:09.326831  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:13:09.326920  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:13:09.357107  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:13:09.376958  504702 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:13:09.376989  504702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:13:09.401692  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:13:09.401736  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:13:09.432256  504702 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
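
The host-record injection confirmed above is the outcome of the sed pipeline run at 09:13:09.055, which splices a hosts block into the CoreDNS Corefile just ahead of the forward plugin (and adds the log directive after errors). Reconstructed from that sed expression, the relevant Corefile fragment should look roughly like:

            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf

With this block in place, pods can resolve host.minikube.internal to the host-side gateway of the docker network (192.168.49.1), falling through to the normal forwarder for every other name.
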
	I1206 09:13:09.435211  504702 node_ready.go:35] waiting up to 6m0s for node "addons-101630" to be "Ready" ...
	I1206 09:13:09.437274  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:13:09.437298  504702 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:13:09.459050  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:13:09.459083  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:13:09.468978  504702 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:13:09.469006  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:13:09.537370  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:13:09.537403  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:13:09.540717  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:13:09.540782  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:13:09.548074  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:13:09.630806  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:13:09.630836  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:13:09.703914  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:13:09.703948  504702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:13:09.747159  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:13:09.747186  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:13:09.804407  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:13:09.804434  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:13:09.827919  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:13:09.828019  504702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:13:09.897753  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:13:09.941465  504702 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-101630" context rescaled to 1 replicas
	I1206 09:13:10.444119  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.229519742s)
	I1206 09:13:10.444161  504702 addons.go:495] Verifying addon ingress=true in "addons-101630"
	I1206 09:13:10.444231  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.206230844s)
	I1206 09:13:10.444267  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.201386254s)
	I1206 09:13:10.444284  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.190467153s)
	I1206 09:13:10.444342  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18717862s)
	I1206 09:13:10.444369  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.186282479s)
	I1206 09:13:10.444394  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.167708126s)
	I1206 09:13:10.444406  504702 addons.go:495] Verifying addon registry=true in "addons-101630"
	I1206 09:13:10.444544  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.163878s)
	I1206 09:13:10.444653  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.128878851s)
	I1206 09:13:10.444731  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.087510642s)
	I1206 09:13:10.445796  504702 addons.go:495] Verifying addon metrics-server=true in "addons-101630"
	I1206 09:13:10.446569  504702 out.go:179] * Verifying registry addon...
	I1206 09:13:10.446639  504702 out.go:179] * Verifying ingress addon...
	I1206 09:13:10.447438  504702 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-101630 service yakd-dashboard -n yakd-dashboard
	
	I1206 09:13:10.449048  504702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:13:10.449610  504702 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:13:10.451927  504702 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:13:10.452060  504702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:13:10.452165  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1206 09:13:10.454506  504702 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
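
Editor's note: the warning above is Kubernetes' optimistic-concurrency conflict. Another writer updated the "local-path" StorageClass between minikube's read and its write, so the stale resourceVersion was rejected with "the object has been modified". A minimal sketch of the usual remedy with client-go's conflict-retry helper follows; the clientset wiring and function name are illustrative assumptions, not minikube's actual code.

    // Sketch: retry a StorageClass mutation on 409 conflicts. RetryOnConflict
    // re-reads the object and replays the change until the write lands or the
    // backoff is exhausted.
    package addons

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func markDefaultStorageClass(c kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-fetch on every attempt so the update carries a fresh
            // resourceVersion instead of the stale one that was rejected.
            sc, err := c.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = c.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }
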
	I1206 09:13:10.936404  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.388289386s)
	W1206 09:13:10.936451  504702 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:13:10.936555  504702 retry.go:31] will retry after 207.959161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
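
Editor's note: the failure above is a create-ordering race, not a broken manifest. The VolumeSnapshotClass custom resource is applied in the same kubectl invocation that creates its CRDs, and the REST mapping for snapshot.storage.k8s.io/v1 is not served until the CRD reports Established. minikube handles it by retrying the whole apply (and re-applying with --force at 09:13:11 below). An alternative sketch, assuming client-go's apiextensions clientset, gates the CR apply on the CRD condition instead; all names here are illustrative.

    // Sketch: poll until the apiserver can actually serve the new kind,
    // instead of retrying a failed apply as the log above does.
    package addons

    import (
        "context"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    func waitForCRD(c apiextensionsclient.Interface, name string) error {
        return wait.PollUntilContextTimeout(context.TODO(), 250*time.Millisecond, 30*time.Second, true,
            func(ctx context.Context) (bool, error) {
                crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // CRD not visible yet; keep polling
                }
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

Calling, say, waitForCRD(c, "volumesnapshotclasses.snapshot.storage.k8s.io") before applying csi-hostpath-snapshotclass.yaml would remove the race entirely rather than papering over it with retries.
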
	I1206 09:13:10.936683  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.038812463s)
	I1206 09:13:10.936723  504702 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-101630"
	I1206 09:13:10.938312  504702 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:13:10.940123  504702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:13:10.942883  504702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:13:10.942900  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:10.952179  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:10.952429  504702 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:13:10.952447  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
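
Editor's note: each kapi.go:96 line in this stretch is one iteration of a poll loop: list pods by label selector and repeat roughly every 500ms until every match leaves Pending. A hedged sketch of that loop with client-go follows; the clientset wiring, helper name, and Running-only success test are assumptions for illustration, not minikube's exact logic.

    // Sketch: wait until all pods matching a label selector report Running.
    package addons

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPods(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists just repoll
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // still Pending, as in the log above
                    }
                }
                return true, nil
            })
    }
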
	I1206 09:13:11.145432  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1206 09:13:11.438643  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:11.443335  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:11.451523  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:11.452750  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:11.944192  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:11.952344  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:11.952418  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:12.443926  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:12.452247  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:12.452380  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:12.943244  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:12.952818  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:12.952848  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:13.443088  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:13.452072  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:13.452250  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:13.636601  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.491123936s)
	W1206 09:13:13.938769  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:13.943507  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:13.951812  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:13.951854  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:14.443587  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:14.452042  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:14.452308  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:14.943255  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:14.951173  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:14.952392  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:15.443106  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:15.452555  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:15.452622  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1206 09:13:15.939561  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:15.943187  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:15.951196  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:15.952440  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:16.443789  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:16.451849  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:16.452086  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:16.575546  504702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:13:16.575611  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:16.593684  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:16.693305  504702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:13:16.706003  504702 addons.go:239] Setting addon gcp-auth=true in "addons-101630"
	I1206 09:13:16.706052  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:16.706391  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:16.724882  504702 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:13:16.724942  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:16.742677  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:16.834813  504702 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:13:16.836322  504702 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:13:16.837598  504702 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:13:16.837616  504702 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:13:16.852035  504702 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:13:16.852061  504702 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:13:16.865112  504702 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:13:16.865134  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:13:16.878219  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:13:16.943078  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:16.952602  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:16.952810  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:17.186797  504702 addons.go:495] Verifying addon gcp-auth=true in "addons-101630"
	I1206 09:13:17.187963  504702 out.go:179] * Verifying gcp-auth addon...
	I1206 09:13:17.189865  504702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:13:17.196721  504702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:13:17.196743  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:17.443298  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:17.451271  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:17.452630  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:17.693690  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:17.942581  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:17.951656  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:17.951827  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:18.193268  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 09:13:18.438422  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:18.442654  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:18.451787  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:18.452047  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:18.692890  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:18.943299  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:18.951347  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:18.952564  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:19.193679  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:19.443195  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:19.451065  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:19.452363  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:19.693350  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:19.943309  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:19.951360  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:19.952599  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:20.193857  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 09:13:20.438926  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:20.443751  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:20.451996  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:20.452096  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:20.693136  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:20.943836  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:20.952405  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:20.952409  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:21.193445  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:21.437822  504702 node_ready.go:49] node "addons-101630" is "Ready"
	I1206 09:13:21.437850  504702 node_ready.go:38] duration metric: took 12.002606923s for node "addons-101630" to be "Ready" ...
	I1206 09:13:21.437866  504702 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:13:21.437914  504702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:13:21.442623  504702 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:13:21.442652  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:21.451450  504702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:13:21.451498  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:21.451811  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:21.455397  504702 api_server.go:72] duration metric: took 12.568738114s to wait for apiserver process to appear ...
	I1206 09:13:21.455422  504702 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:13:21.455448  504702 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 09:13:21.459714  504702 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 09:13:21.460502  504702 api_server.go:141] control plane version: v1.34.2
	I1206 09:13:21.460526  504702 api_server.go:131] duration metric: took 5.096611ms to wait for apiserver health ...
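
Editor's note: the healthz probe logged above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok". A self-contained sketch follows; the InsecureSkipVerify shortcut is an assumption to keep it short, where a faithful client would trust the cluster's CA bundle instead.

    // Sketch: probe https://<apiserver>:8443/healthz and require "ok".
    package addons

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Test clusters use a self-signed CA; skipping verification is a
            // sketch-only shortcut, not what a production client should do.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }
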
	I1206 09:13:21.460536  504702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:13:21.466052  504702 system_pods.go:59] 20 kube-system pods found
	I1206 09:13:21.466087  504702 system_pods.go:61] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending
	I1206 09:13:21.466100  504702 system_pods.go:61] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:21.466107  504702 system_pods.go:61] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending
	I1206 09:13:21.466115  504702 system_pods.go:61] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending
	I1206 09:13:21.466125  504702 system_pods.go:61] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending
	I1206 09:13:21.466130  504702 system_pods.go:61] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:21.466140  504702 system_pods.go:61] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:21.466145  504702 system_pods.go:61] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:21.466151  504702 system_pods.go:61] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:21.466159  504702 system_pods.go:61] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending
	I1206 09:13:21.466165  504702 system_pods.go:61] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:21.466172  504702 system_pods.go:61] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:21.466180  504702 system_pods.go:61] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:21.466190  504702 system_pods.go:61] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending
	I1206 09:13:21.466199  504702 system_pods.go:61] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:21.466207  504702 system_pods.go:61] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:21.466216  504702 system_pods.go:61] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending
	I1206 09:13:21.466225  504702 system_pods.go:61] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending
	I1206 09:13:21.466236  504702 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:21.466247  504702 system_pods.go:61] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:21.466259  504702 system_pods.go:74] duration metric: took 5.714323ms to wait for pod list to return data ...
	I1206 09:13:21.466272  504702 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:13:21.468290  504702 default_sa.go:45] found service account: "default"
	I1206 09:13:21.468313  504702 default_sa.go:55] duration metric: took 2.031273ms for default service account to be created ...
	I1206 09:13:21.468323  504702 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:13:21.471683  504702 system_pods.go:86] 20 kube-system pods found
	I1206 09:13:21.471712  504702 system_pods.go:89] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending
	I1206 09:13:21.471724  504702 system_pods.go:89] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:21.471730  504702 system_pods.go:89] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending
	I1206 09:13:21.471742  504702 system_pods.go:89] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending
	I1206 09:13:21.471748  504702 system_pods.go:89] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending
	I1206 09:13:21.471755  504702 system_pods.go:89] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:21.471762  504702 system_pods.go:89] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:21.471774  504702 system_pods.go:89] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:21.471780  504702 system_pods.go:89] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:21.471787  504702 system_pods.go:89] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending
	I1206 09:13:21.471797  504702 system_pods.go:89] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:21.471803  504702 system_pods.go:89] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:21.471811  504702 system_pods.go:89] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:21.471816  504702 system_pods.go:89] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending
	I1206 09:13:21.471826  504702 system_pods.go:89] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:21.471836  504702 system_pods.go:89] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:21.471843  504702 system_pods.go:89] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending
	I1206 09:13:21.471848  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending
	I1206 09:13:21.471857  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:21.471864  504702 system_pods.go:89] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:21.471884  504702 retry.go:31] will retry after 259.820638ms: missing components: kube-dns
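
Editor's note: the uneven retry delays recorded by retry.go in this run (207ms, 247ms, 259ms, 330ms) are consistent with exponential backoff plus random jitter. A generic sketch of that pattern follows; it is not minikube's actual pkg/util/retry API, and the attempt count and base delay are made-up parameters.

    // Sketch: retry fn with exponentially growing, jittered sleeps.
    package addons

    import (
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Double the delay each attempt and add jitter so concurrent
            // retries (like the addon waits above) don't fire in lockstep.
            sleep := base<<i + time.Duration(rand.Int63n(int64(base)))
            time.Sleep(sleep)
        }
        return err
    }
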
	I1206 09:13:21.694200  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:21.800817  504702 system_pods.go:86] 20 kube-system pods found
	I1206 09:13:21.800862  504702 system_pods.go:89] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:13:21.800874  504702 system_pods.go:89] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:21.800886  504702 system_pods.go:89] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:13:21.800894  504702 system_pods.go:89] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:13:21.800903  504702 system_pods.go:89] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:13:21.800909  504702 system_pods.go:89] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:21.800915  504702 system_pods.go:89] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:21.800922  504702 system_pods.go:89] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:21.800928  504702 system_pods.go:89] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:21.800937  504702 system_pods.go:89] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:13:21.800943  504702 system_pods.go:89] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:21.800950  504702 system_pods.go:89] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:21.800957  504702 system_pods.go:89] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:21.800965  504702 system_pods.go:89] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:13:21.800972  504702 system_pods.go:89] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:21.800979  504702 system_pods.go:89] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:21.800986  504702 system_pods.go:89] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:13:21.800997  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:21.801007  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:21.801016  504702 system_pods.go:89] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:21.801039  504702 retry.go:31] will retry after 247.83868ms: missing components: kube-dns
	I1206 09:13:21.944568  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:21.951919  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:21.953182  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:22.053802  504702 system_pods.go:86] 20 kube-system pods found
	I1206 09:13:22.053841  504702 system_pods.go:89] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:13:22.053852  504702 system_pods.go:89] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:22.053862  504702 system_pods.go:89] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:13:22.053872  504702 system_pods.go:89] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:13:22.053899  504702 system_pods.go:89] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:13:22.053913  504702 system_pods.go:89] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:22.053922  504702 system_pods.go:89] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:22.053931  504702 system_pods.go:89] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:22.053939  504702 system_pods.go:89] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:22.053952  504702 system_pods.go:89] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:13:22.053960  504702 system_pods.go:89] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:22.053966  504702 system_pods.go:89] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:22.053975  504702 system_pods.go:89] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:22.053984  504702 system_pods.go:89] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:13:22.053994  504702 system_pods.go:89] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:22.054008  504702 system_pods.go:89] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:22.054021  504702 system_pods.go:89] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:13:22.054039  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:22.054051  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:22.054059  504702 system_pods.go:89] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:22.054086  504702 retry.go:31] will retry after 330.491691ms: missing components: kube-dns
	I1206 09:13:22.194651  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:22.391494  504702 system_pods.go:86] 20 kube-system pods found
	I1206 09:13:22.391533  504702 system_pods.go:89] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:13:22.391539  504702 system_pods.go:89] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Running
	I1206 09:13:22.391548  504702 system_pods.go:89] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:13:22.391554  504702 system_pods.go:89] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:13:22.391564  504702 system_pods.go:89] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:13:22.391571  504702 system_pods.go:89] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:22.391577  504702 system_pods.go:89] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:22.391583  504702 system_pods.go:89] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:22.391593  504702 system_pods.go:89] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:22.391603  504702 system_pods.go:89] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:13:22.391612  504702 system_pods.go:89] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:22.391618  504702 system_pods.go:89] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:22.391629  504702 system_pods.go:89] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:22.391634  504702 system_pods.go:89] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:13:22.391642  504702 system_pods.go:89] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:22.391651  504702 system_pods.go:89] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:22.391660  504702 system_pods.go:89] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:13:22.391675  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:22.391683  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:22.391687  504702 system_pods.go:89] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Running
	I1206 09:13:22.391696  504702 system_pods.go:126] duration metric: took 923.366501ms to wait for k8s-apps to be running ...
	I1206 09:13:22.391707  504702 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:13:22.391751  504702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:13:22.410633  504702 system_svc.go:56] duration metric: took 18.91386ms WaitForService to wait for kubelet
	I1206 09:13:22.410669  504702 kubeadm.go:587] duration metric: took 13.524014508s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:13:22.410694  504702 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:13:22.413826  504702 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:13:22.413856  504702 node_conditions.go:123] node cpu capacity is 8
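
Editor's note: the node_conditions lines above read each node's capacity and confirm no pressure condition is set before declaring the cluster healthy. A rough client-go equivalent follows; the helper name and the "any non-Ready condition set to True is fatal" rule are illustrative assumptions.

    // Sketch: print node capacity and fail on MemoryPressure/DiskPressure/
    // PIDPressure (or any other non-Ready condition reporting True).
    package addons

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func verifyNodePressure(c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, cond := range n.Status.Conditions {
                if cond.Type != corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
                }
            }
        }
        return nil
    }
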
	I1206 09:13:22.413877  504702 node_conditions.go:105] duration metric: took 3.176509ms to run NodePressure ...
	I1206 09:13:22.413892  504702 start.go:242] waiting for startup goroutines ...
	I1206 09:13:22.490934  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:22.491011  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:22.491241  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:22.693513  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:22.943545  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:22.951721  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:22.952847  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:23.193869  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:23.444310  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:23.451432  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:23.452590  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:23.693318  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:23.943956  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:23.952122  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:23.952414  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:24.193358  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:24.443830  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:24.453848  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:24.453916  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:24.693801  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:24.944196  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:24.952918  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:24.952977  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:25.193729  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:25.444435  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:25.451673  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:25.452725  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:25.694323  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:25.944418  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:25.952083  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:25.952890  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:26.193088  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:26.444708  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:26.452126  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:26.452323  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:26.692830  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:26.943932  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:26.952310  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:26.952428  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:27.193153  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:27.443599  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:27.452450  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:27.452894  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:27.693229  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:27.943625  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:27.951858  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:27.951956  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:28.192919  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:28.444371  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:28.451561  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:28.452594  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:28.693399  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:28.943699  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:28.951958  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:28.952019  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:29.194549  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:29.444165  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:29.452859  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:29.452951  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:29.693123  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:29.956938  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:29.956961  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:29.957073  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:30.193395  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:30.444467  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:30.452174  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:30.452805  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:30.694118  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:30.944064  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:30.952535  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:30.952553  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:31.193442  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:31.444209  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:31.452723  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:31.452901  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:31.692762  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:31.944793  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:31.952500  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:31.952503  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:32.193531  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:32.443670  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:32.452197  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:32.452947  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:32.692581  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:32.943699  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:32.951828  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:32.952009  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:33.192632  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:33.444049  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:33.452570  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:33.452720  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:33.693626  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:33.944239  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:33.951313  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:33.952346  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:34.193853  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:34.444660  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:34.452200  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:34.452221  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:34.693834  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:34.945096  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:34.952995  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:34.953040  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:35.194994  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:35.444504  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:35.452417  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:35.453131  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:35.693632  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:35.944119  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:35.952814  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:35.952938  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:36.193943  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:36.444351  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:36.451756  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:36.452718  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:36.693353  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:36.943453  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:36.952189  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:36.953051  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:37.192748  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:37.444103  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:37.454666  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:37.455279  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:37.694338  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:37.945361  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:37.952626  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:37.952828  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:38.193939  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:38.444681  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:38.452705  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:38.453051  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:38.694017  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:38.944951  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:38.952744  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:38.953296  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:39.193760  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:39.443504  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:39.452375  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:39.452868  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:39.694196  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:39.947405  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:39.951972  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:39.953140  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:40.193549  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:40.444253  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:40.453324  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:40.453352  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:40.694087  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:40.944930  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:40.953342  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:40.953352  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:41.193703  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:41.444442  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:41.452300  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:41.453136  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:41.693662  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:41.944517  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:41.952315  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:41.952949  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:42.193971  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:42.446257  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:42.452816  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:42.452924  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:42.694671  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:42.943740  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:42.951835  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:42.951902  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:43.193600  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:43.443393  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:43.452925  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:43.452948  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:43.693629  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:43.943425  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:43.951355  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:43.952378  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:44.193274  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:44.443010  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:44.452292  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:44.452316  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:44.692989  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:44.944527  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:44.951855  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:44.951878  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:45.192852  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:45.444398  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:45.452910  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:45.452909  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:45.692671  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:45.943707  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:45.952998  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:45.953025  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:46.194743  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:46.444153  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:46.452613  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:46.452633  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:46.693550  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:46.943416  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:46.951563  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:46.952674  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:47.194639  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:47.443622  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:47.451945  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:47.452974  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:47.692809  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:47.944249  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:48.044528  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:48.044796  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:48.193409  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:48.443362  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:48.451923  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:48.452723  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:48.693924  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:48.944692  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:48.952151  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:48.952190  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:49.193730  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:49.444099  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:49.452332  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:49.452394  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:49.693469  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:49.944065  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:49.952605  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:49.952804  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:50.194754  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:50.444374  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:50.452917  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:50.452976  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:50.693257  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:50.944132  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:50.952939  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:50.953086  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:51.193262  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:51.443754  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:51.452322  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:51.452577  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:51.694044  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:51.944116  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:51.952829  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:51.952908  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:52.193389  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:52.443694  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:52.452185  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:52.452219  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:52.693602  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:52.943630  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:52.951787  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:52.951819  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:53.193053  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:53.444646  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:53.451765  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:53.451824  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:53.693439  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:53.944046  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:53.952730  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:53.952773  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:54.193861  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:54.444577  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:54.452273  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:54.453229  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:54.693294  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:54.943396  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:54.951493  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:54.952604  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:55.193907  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:55.444620  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:55.452337  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:55.453012  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:55.693102  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:55.943528  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:55.952047  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:55.952065  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:56.193398  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:56.443600  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:56.451884  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:56.452783  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:56.693435  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:56.945086  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:56.953244  504702 kapi.go:107] duration metric: took 46.504193263s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 09:13:56.953511  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:57.193109  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:57.444105  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:57.452648  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:57.694598  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:57.944945  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:57.952321  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:58.194177  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:58.444723  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:58.453420  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:58.693418  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:58.944095  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:58.952378  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:59.197362  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:59.444037  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:59.452811  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:59.695038  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:59.944744  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:59.953575  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:00.194789  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:00.444297  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:00.453132  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:00.693641  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:00.944607  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:00.953617  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:01.193306  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:01.444264  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:01.453269  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:01.693907  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:01.943920  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:01.952987  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:02.194832  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:02.444376  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:02.452357  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:02.694272  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:02.943887  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:02.952762  504702 kapi.go:107] duration metric: took 52.503147826s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:14:03.194141  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:03.443269  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:03.786520  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:03.943756  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:04.194017  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:04.444253  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:04.692855  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:04.944490  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:05.193557  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:05.443887  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:05.693836  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:05.944957  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:06.246780  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:06.444070  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:06.693115  504702 kapi.go:107] duration metric: took 49.503243952s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:14:06.694645  504702 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-101630 cluster.
	I1206 09:14:06.695751  504702 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:14:06.696826  504702 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 09:14:06.944403  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:07.443978  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:07.943240  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:08.444492  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:08.944380  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:09.443836  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:09.943626  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:10.443848  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:10.944340  504702 kapi.go:107] duration metric: took 1m0.004213562s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 09:14:10.946040  504702 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, registry-creds, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1206 09:14:10.947017  504702 addons.go:530] duration metric: took 1m2.060322172s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner registry-creds inspektor-gadget cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1206 09:14:10.947060  504702 start.go:247] waiting for cluster config update ...
	I1206 09:14:10.947088  504702 start.go:256] writing updated cluster config ...
	I1206 09:14:10.947371  504702 ssh_runner.go:195] Run: rm -f paused
	I1206 09:14:10.951517  504702 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:14:10.954850  504702 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kwpl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.959318  504702 pod_ready.go:94] pod "coredns-66bc5c9577-kwpl7" is "Ready"
	I1206 09:14:10.959345  504702 pod_ready.go:86] duration metric: took 4.474513ms for pod "coredns-66bc5c9577-kwpl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.961193  504702 pod_ready.go:83] waiting for pod "etcd-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.964946  504702 pod_ready.go:94] pod "etcd-addons-101630" is "Ready"
	I1206 09:14:10.964968  504702 pod_ready.go:86] duration metric: took 3.754535ms for pod "etcd-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.966776  504702 pod_ready.go:83] waiting for pod "kube-apiserver-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.970154  504702 pod_ready.go:94] pod "kube-apiserver-addons-101630" is "Ready"
	I1206 09:14:10.970172  504702 pod_ready.go:86] duration metric: took 3.377753ms for pod "kube-apiserver-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.971944  504702 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:11.356077  504702 pod_ready.go:94] pod "kube-controller-manager-addons-101630" is "Ready"
	I1206 09:14:11.356105  504702 pod_ready.go:86] duration metric: took 384.143807ms for pod "kube-controller-manager-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:11.556507  504702 pod_ready.go:83] waiting for pod "kube-proxy-tnjbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:11.956232  504702 pod_ready.go:94] pod "kube-proxy-tnjbc" is "Ready"
	I1206 09:14:11.956263  504702 pod_ready.go:86] duration metric: took 399.722574ms for pod "kube-proxy-tnjbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:12.155510  504702 pod_ready.go:83] waiting for pod "kube-scheduler-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:12.554922  504702 pod_ready.go:94] pod "kube-scheduler-addons-101630" is "Ready"
	I1206 09:14:12.554955  504702 pod_ready.go:86] duration metric: took 399.414409ms for pod "kube-scheduler-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:12.554971  504702 pod_ready.go:40] duration metric: took 1.603415142s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:14:12.609065  504702 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:14:12.610568  504702 out.go:179] * Done! kubectl is now configured to use "addons-101630" cluster and "default" namespace by default
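
	The gcp-auth hint a few lines above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal pod manifest sketch of that opt-out follows; the pod name and image are hypothetical and only the label key comes from the log (the "true" value is the conventional choice, not stated in the message):

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-demo        # hypothetical name, not from this report
	      labels:
	        gcp-auth-skip-secret: "true"  # label key from the gcp-auth message; value assumed
	    spec:
	      containers:
	      - name: app
	        image: registry.k8s.io/pause:3.9   # placeholder image, chosen only for illustration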
	
	
	==> CRI-O <==
	Dec 06 09:15:38 addons-101630 crio[770]: time="2025-12-06T09:15:38.303972131Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-qrdwx/registry-creds" id=16fed3d4-312a-4bca-b492-e81218f09778 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:15:38 addons-101630 crio[770]: time="2025-12-06T09:15:38.304084262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:15:38 addons-101630 crio[770]: time="2025-12-06T09:15:38.309608048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:15:38 addons-101630 crio[770]: time="2025-12-06T09:15:38.310249845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:15:38 addons-101630 crio[770]: time="2025-12-06T09:15:38.339153837Z" level=info msg="Created container 954acc79626b04eab60c4a9a5d092768b7731f53f0a32e1a66caa04e1bce991f: kube-system/registry-creds-764b6fb674-qrdwx/registry-creds" id=16fed3d4-312a-4bca-b492-e81218f09778 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:15:38 addons-101630 crio[770]: time="2025-12-06T09:15:38.339803636Z" level=info msg="Starting container: 954acc79626b04eab60c4a9a5d092768b7731f53f0a32e1a66caa04e1bce991f" id=a536ab9a-0202-4b16-8729-da075f090e53 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:15:38 addons-101630 crio[770]: time="2025-12-06T09:15:38.341507595Z" level=info msg="Started container" PID=8886 containerID=954acc79626b04eab60c4a9a5d092768b7731f53f0a32e1a66caa04e1bce991f description=kube-system/registry-creds-764b6fb674-qrdwx/registry-creds id=a536ab9a-0202-4b16-8729-da075f090e53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6bebe5cec7f6da18fe6b771ca2e2604e96e7d4823bbcb94f0d133aef147b5656
	Dec 06 09:16:04 addons-101630 crio[770]: time="2025-12-06T09:16:04.297244858Z" level=info msg="Stopping pod sandbox: 6a88600ed97a8348a71f983cec9fdd558d27be84b95a6b8642f5f1d74e6cb726" id=4baee7b9-8327-47d6-8006-d75bb81300d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 09:16:04 addons-101630 crio[770]: time="2025-12-06T09:16:04.29730851Z" level=info msg="Stopped pod sandbox (already stopped): 6a88600ed97a8348a71f983cec9fdd558d27be84b95a6b8642f5f1d74e6cb726" id=4baee7b9-8327-47d6-8006-d75bb81300d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 09:16:04 addons-101630 crio[770]: time="2025-12-06T09:16:04.297656954Z" level=info msg="Removing pod sandbox: 6a88600ed97a8348a71f983cec9fdd558d27be84b95a6b8642f5f1d74e6cb726" id=9f13a7b9-89eb-4572-98a3-d1aaed376049 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 06 09:16:04 addons-101630 crio[770]: time="2025-12-06T09:16:04.300871553Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:16:04 addons-101630 crio[770]: time="2025-12-06T09:16:04.300927419Z" level=info msg="Removed pod sandbox: 6a88600ed97a8348a71f983cec9fdd558d27be84b95a6b8642f5f1d74e6cb726" id=9f13a7b9-89eb-4572-98a3-d1aaed376049 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.204984284Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-95r6g/POD" id=69d707a6-7793-4261-a33e-3a0b3712b007 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.205057961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.211742782Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-95r6g Namespace:default ID:b7c64476cd885a47727a293c2bf66063f179a8d437cefafdae9b0ec9af6c56e3 UID:711f4444-9e74-4668-90a4-3f50df84110a NetNS:/var/run/netns/2618ebbc-1a02-4d94-a035-7488c3e77ce1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00061e660}] Aliases:map[]}"
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.211785139Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-95r6g to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.222536998Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-95r6g Namespace:default ID:b7c64476cd885a47727a293c2bf66063f179a8d437cefafdae9b0ec9af6c56e3 UID:711f4444-9e74-4668-90a4-3f50df84110a NetNS:/var/run/netns/2618ebbc-1a02-4d94-a035-7488c3e77ce1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00061e660}] Aliases:map[]}"
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.222714918Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-95r6g for CNI network kindnet (type=ptp)"
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.223666622Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.224689375Z" level=info msg="Ran pod sandbox b7c64476cd885a47727a293c2bf66063f179a8d437cefafdae9b0ec9af6c56e3 with infra container: default/hello-world-app-5d498dc89-95r6g/POD" id=69d707a6-7793-4261-a33e-3a0b3712b007 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.226118294Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5328919c-cb6f-41b1-8fe4-1ea9bb51ac38 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.226263198Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=5328919c-cb6f-41b1-8fe4-1ea9bb51ac38 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.226325074Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=5328919c-cb6f-41b1-8fe4-1ea9bb51ac38 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.227140308Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=39f7b57a-5b65-40df-a2a3-026a8295591f name=/runtime.v1.ImageService/PullImage
	Dec 06 09:17:02 addons-101630 crio[770]: time="2025-12-06T09:17:02.231960316Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	954acc79626b0       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   6bebe5cec7f6d       registry-creds-764b6fb674-qrdwx            kube-system
	b18f347abba10       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   c8e6bc70ba86e       nginx                                      default
	528ade097099b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   d0b2980eae5ed       busybox                                    default
	48412b93386c3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	b43a181098b64       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	0efcf1711c0c1       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	f53e5b7b950e3       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	a97bcb7cf0006       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   05eee4163c54c       gcp-auth-78565c9fb4-hrdcs                  gcp-auth
	953fb247031e3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	027fd0861c8a3       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             3 minutes ago        Running             controller                               0                   ab245d0b98cb1       ingress-nginx-controller-6c8bf45fb-d2mvt   ingress-nginx
	e5e071c542354       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            3 minutes ago        Running             gadget                                   0                   80c0409bac441       gadget-qs5wx                               gadget
	79b2f00dfcfb1       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   747b39152df54       registry-proxy-cdw5g                       kube-system
	fb02c57fd629b       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   01b5606e78051       csi-hostpath-resizer-0                     kube-system
	696827076a771       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	7a4130788df8e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   0fb6a894912ae       amd-gpu-device-plugin-hz4j9                kube-system
	8b6f64e34b32c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   103e8ff330d55       snapshot-controller-7d9fbc56b8-rc4h2       kube-system
	41fc749cc8817       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   a4ccc80d877b0       nvidia-device-plugin-daemonset-lv6tv       kube-system
	e27ecbcda3b56       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   317b469c32e7e       snapshot-controller-7d9fbc56b8-99cb8       kube-system
	fc9564c451d5d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   641ab723d81f3       csi-hostpath-attacher-0                    kube-system
	37ca696aec063       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             3 minutes ago        Exited              patch                                    2                   c386a25fc3e80       ingress-nginx-admission-patch-6zxgf        ingress-nginx
	7098dc77bd42b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   afbb0dcb871c0       kube-ingress-dns-minikube                  kube-system
	6556d8ce037db       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   80815c703d97e       ingress-nginx-admission-create-ssqfv       ingress-nginx
	c30bb6e013b15       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   9ce2b593c7da9       cloud-spanner-emulator-5bdddb765-b2jhf     default
	09004bd8456e2       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   2b618017f3e82       yakd-dashboard-5ff678cb9-pp9k4             yakd-dashboard
	303330f3ac4f5       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   aa2beeffc3a81       local-path-provisioner-648f6765c9-wlsqc    local-path-storage
	b07cd0b15477a       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   3b14a0fa4acb7       registry-6b586f9694-qh5nl                  kube-system
	3fb8bd4648004       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   79a82da8e42af       metrics-server-85b7d694d7-gj9kl            kube-system
	7324c334d61b7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   82b452159f888       coredns-66bc5c9577-kwpl7                   kube-system
	fc93539bfb63a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   aa5b7f5a3196f       storage-provisioner                        kube-system
	9ac221cf3f54d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago        Running             kube-proxy                               0                   0aeb135ff6f31       kube-proxy-tnjbc                           kube-system
	b12a294179793       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago        Running             kindnet-cni                              0                   29402f6cde7e7       kindnet-j6wfg                              kube-system
	6965300427d3a       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago        Running             kube-scheduler                           0                   75dc39cbbff24       kube-scheduler-addons-101630               kube-system
	a89417715572b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago        Running             kube-apiserver                           0                   0368667584c60       kube-apiserver-addons-101630               kube-system
	d16ba02709126       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago        Running             kube-controller-manager                  0                   a1cf8c7e9fc02       kube-controller-manager-addons-101630      kube-system
	3b636fcb6c702       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago        Running             etcd                                     0                   0f8273db6d719       etcd-addons-101630                         kube-system
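	
	The table above is a snapshot of the CRI container list taken inside the node. A sketch for reproducing it against a live profile (same profile name as in this report):
	
	  # -a includes Exited containers such as the ingress-nginx admission create/patch jobs
	  minikube -p addons-101630 ssh -- sudo crictl ps -a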
	
	
	==> coredns [7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192] <==
	[INFO] 10.244.0.22:51554 - 6704 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016081s
	[INFO] 10.244.0.22:54124 - 49175 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.007034347s
	[INFO] 10.244.0.22:38539 - 35111 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.008098558s
	[INFO] 10.244.0.22:59178 - 62926 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006869018s
	[INFO] 10.244.0.22:36716 - 28006 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007666058s
	[INFO] 10.244.0.22:34393 - 64571 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005629973s
	[INFO] 10.244.0.22:51183 - 62065 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007680924s
	[INFO] 10.244.0.22:54470 - 53121 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001958057s
	[INFO] 10.244.0.22:42103 - 6854 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001019617s
	[INFO] 10.244.0.25:55542 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000281156s
	[INFO] 10.244.0.25:32944 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000184145s
	[INFO] 10.244.0.31:36932 - 36525 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000257597s
	[INFO] 10.244.0.31:57848 - 25710 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000345689s
	[INFO] 10.244.0.31:35925 - 22002 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000144036s
	[INFO] 10.244.0.31:51052 - 9364 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000163105s
	[INFO] 10.244.0.31:42318 - 2634 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000111035s
	[INFO] 10.244.0.31:56209 - 39567 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000138185s
	[INFO] 10.244.0.31:42021 - 62452 "AAAA IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005834811s
	[INFO] 10.244.0.31:40784 - 18461 "A IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.00592391s
	[INFO] 10.244.0.31:37022 - 38945 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004347514s
	[INFO] 10.244.0.31:48507 - 40314 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004708833s
	[INFO] 10.244.0.31:39419 - 13006 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003891192s
	[INFO] 10.244.0.31:41544 - 46823 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006611724s
	[INFO] 10.244.0.31:41053 - 26191 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001449375s
	[INFO] 10.244.0.31:36762 - 3714 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001754724s
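	
	The NXDOMAIN runs above are the expected effect of the pod resolv.conf search list (ndots:5): an external name such as storage.googleapis.com is first tried against every cluster and GCE search domain before the bare name finally answers NOERROR. A quick way to see that search list from a pod, assuming the busybox pod from this run still exists:
	
	  kubectl exec busybox -- cat /etc/resolv.conf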
	
	
	==> describe nodes <==
	Name:               addons-101630
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-101630
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-101630
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_13_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-101630
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-101630"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101630
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:16:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:16:08 +0000   Sat, 06 Dec 2025 09:13:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:16:08 +0000   Sat, 06 Dec 2025 09:13:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:16:08 +0000   Sat, 06 Dec 2025 09:13:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:16:08 +0000   Sat, 06 Dec 2025 09:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-101630
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                c15a275c-2a6b-449e-a9b4-51c1acabce68
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     cloud-spanner-emulator-5bdddb765-b2jhf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  default                     hello-world-app-5d498dc89-95r6g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-qs5wx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  gcp-auth                    gcp-auth-78565c9fb4-hrdcs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-d2mvt    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m53s
	  kube-system                 amd-gpu-device-plugin-hz4j9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 coredns-66bc5c9577-kwpl7                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m53s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 csi-hostpathplugin-d4rl2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-addons-101630                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m59s
	  kube-system                 kindnet-j6wfg                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m54s
	  kube-system                 kube-apiserver-addons-101630                250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-addons-101630       200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-tnjbc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-addons-101630                100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 metrics-server-85b7d694d7-gj9kl             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m53s
	  kube-system                 nvidia-device-plugin-daemonset-lv6tv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 registry-6b586f9694-qh5nl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 registry-creds-764b6fb674-qrdwx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 registry-proxy-cdw5g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 snapshot-controller-7d9fbc56b8-99cb8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 snapshot-controller-7d9fbc56b8-rc4h2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  local-path-storage          local-path-provisioner-648f6765c9-wlsqc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pp9k4              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m52s  kube-proxy       
	  Normal  Starting                 3m59s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s  kubelet          Node addons-101630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s  kubelet          Node addons-101630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s  kubelet          Node addons-101630 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m55s  node-controller  Node addons-101630 event: Registered Node addons-101630 in Controller
	  Normal  NodeReady                3m42s  kubelet          Node addons-101630 status is now: NodeReady
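	
	This section mirrors the kubectl node view. To re-query the same data on a live cluster, assuming kubectl is pointed at this profile:
	
	  kubectl describe node addons-101630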
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
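	
	The "martian source" entries mean the kernel saw packets whose source address is not routable on the receiving interface, here 127.0.0.1 arriving on eth0. That pattern is consistent with kube-proxy's route_localnet=1 setting shown later in this report (localhost NodePort traffic), so it reads as noise rather than a failure. To pull the same buffer with human-readable timestamps:
	
	  minikube -p addons-101630 ssh -- sudo dmesg --ctime | tail -n 30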
	
	
	==> etcd [3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f] <==
	{"level":"warn","ts":"2025-12-06T09:13:01.248424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.254701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.262602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.272562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.279874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.286189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.293849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.300603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.308033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.315761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.322572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.328632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.335600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.343158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.349814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.366317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.372953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.379877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.430028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:11.382478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:11.389099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:38.831314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:38.842921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:38.857778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:38.866030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34176","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a97bcb7cf000697c1f5740ab1cdc12265652a631bd71b13b9889a5f102ca22c8] <==
	2025/12/06 09:14:06 GCP Auth Webhook started!
	2025/12/06 09:14:12 Ready to marshal response ...
	2025/12/06 09:14:12 Ready to write response ...
	2025/12/06 09:14:13 Ready to marshal response ...
	2025/12/06 09:14:13 Ready to write response ...
	2025/12/06 09:14:13 Ready to marshal response ...
	2025/12/06 09:14:13 Ready to write response ...
	2025/12/06 09:14:28 Ready to marshal response ...
	2025/12/06 09:14:28 Ready to write response ...
	2025/12/06 09:14:28 Ready to marshal response ...
	2025/12/06 09:14:28 Ready to write response ...
	2025/12/06 09:14:32 Ready to marshal response ...
	2025/12/06 09:14:32 Ready to write response ...
	2025/12/06 09:14:38 Ready to marshal response ...
	2025/12/06 09:14:38 Ready to write response ...
	2025/12/06 09:14:39 Ready to marshal response ...
	2025/12/06 09:14:39 Ready to write response ...
	2025/12/06 09:14:40 Ready to marshal response ...
	2025/12/06 09:14:40 Ready to write response ...
	2025/12/06 09:15:05 Ready to marshal response ...
	2025/12/06 09:15:05 Ready to write response ...
	2025/12/06 09:17:01 Ready to marshal response ...
	2025/12/06 09:17:01 Ready to write response ...
	
	
	==> kernel <==
	 09:17:03 up  1:59,  0 user,  load average: 0.69, 1.22, 13.45
	Linux addons-101630 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56] <==
	I1206 09:15:00.935617       1 main.go:301] handling current node
	I1206 09:15:10.927357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:15:10.927387       1 main.go:301] handling current node
	I1206 09:15:20.929485       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:15:20.929528       1 main.go:301] handling current node
	I1206 09:15:30.928140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:15:30.928183       1 main.go:301] handling current node
	I1206 09:15:40.928568       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:15:40.928605       1 main.go:301] handling current node
	I1206 09:15:50.931572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:15:50.931605       1 main.go:301] handling current node
	I1206 09:16:00.928084       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:16:00.928119       1 main.go:301] handling current node
	I1206 09:16:10.928312       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:16:10.928362       1 main.go:301] handling current node
	I1206 09:16:20.928097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:16:20.928183       1 main.go:301] handling current node
	I1206 09:16:30.936199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:16:30.936254       1 main.go:301] handling current node
	I1206 09:16:40.936351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:16:40.936389       1 main.go:301] handling current node
	I1206 09:16:50.934010       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:16:50.934046       1 main.go:301] handling current node
	I1206 09:17:00.934269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:17:00.934311       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b] <==
	W1206 09:13:21.361510       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.149.170:443: connect: connection refused
	E1206 09:13:21.361550       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.149.170:443: connect: connection refused" logger="UnhandledError"
	W1206 09:13:21.368656       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.149.170:443: connect: connection refused
	E1206 09:13:21.368703       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.149.170:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:24.348480       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	W1206 09:13:24.348522       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 09:13:24.348591       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 09:13:24.348848       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:24.354877       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:24.375644       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:24.416852       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	I1206 09:13:24.523413       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 09:13:38.831307       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1206 09:13:38.842954       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 09:13:38.857718       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 09:13:38.865977       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1206 09:14:22.319173       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35032: use of closed network connection
	E1206 09:14:22.469729       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35062: use of closed network connection
	I1206 09:14:38.368442       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 09:14:38.544798       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.103.123"}
	I1206 09:14:49.880889       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 09:17:01.971429       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.228.196"}
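	
	The webhook and v1beta1.metrics.k8s.io errors at 09:13:2x are startup ordering: the gcp-auth webhook and metrics-server backends were not serving yet when the apiserver first called them. One way to confirm both recovered, assuming kubectl targets this profile:
	
	  # Available=True means the aggregated metrics API came up
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  # the gcp-auth addon's mutating webhook should be listed here
	  kubectl get mutatingwebhookconfigurations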
	
	
	==> kube-controller-manager [d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add] <==
	I1206 09:13:08.813045       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:13:08.813112       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:13:08.813200       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:13:08.813229       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:13:08.813306       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:13:08.813617       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:13:08.813638       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:13:08.813685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:13:08.813715       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:13:08.813735       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:13:08.815436       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:13:08.816659       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:13:08.817980       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:13:08.818014       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:08.818018       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:13:08.818039       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:13:08.822528       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:13:08.833208       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1206 09:13:10.176266       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1206 09:13:23.765910       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1206 09:13:38.823656       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 09:13:38.823735       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 09:13:38.846011       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 09:13:38.924202       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:38.946431       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d] <==
	I1206 09:13:10.525825       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:13:10.612283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:13:10.713419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:13:10.713513       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:13:10.713617       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:13:10.741353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:13:10.741427       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:13:10.747176       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:13:10.747672       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:13:10.747722       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:10.749064       1 config.go:200] "Starting service config controller"
	I1206 09:13:10.749160       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:13:10.749090       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:13:10.749276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:13:10.749162       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:13:10.749378       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:13:10.749232       1 config.go:309] "Starting node config controller"
	I1206 09:13:10.749499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:13:10.749509       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:13:10.849478       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:13:10.849511       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:13:10.849449       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
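	
	kube-proxy's note about route_localnet=1 above is what makes the 127.0.0.1 "martian source" lines in the dmesg section expected on this node. To verify the sysctl it set, assuming the node is still up:
	
	  minikube -p addons-101630 ssh -- sysctl net.ipv4.conf.all.route_localnet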
	
	
	==> kube-scheduler [6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480] <==
	E1206 09:13:01.831340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:13:01.831503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:13:01.831507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:13:01.831527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:13:01.831682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:13:01.831770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:13:01.831866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:13:01.831871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:13:01.831870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:13:01.831946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:13:01.831989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:13:01.832028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:13:01.832028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:13:02.662642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:13:02.673647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:13:02.706865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:13:02.794420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:13:02.798492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:13:02.802373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:13:02.879227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:13:02.931891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:13:02.986153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:13:03.012309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:13:03.091092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:13:05.225890       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
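	
	The burst of "Failed to watch ... is forbidden" errors at 09:13:01-09:13:03 is the usual scheduler startup race: the informers start before the system:kube-scheduler RBAC bindings are visible, and the final "Caches are synced" line shows they recovered. A spot check on a live cluster (requires impersonation rights):
	
	  kubectl auth can-i list nodes --as=system:kube-scheduler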
	
	
	==> kubelet <==
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.565602    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cc277a42-6eb0-494d-bf16-45d0456b9cf2-gcp-creds\") pod \"cc277a42-6eb0-494d-bf16-45d0456b9cf2\" (UID: \"cc277a42-6eb0-494d-bf16-45d0456b9cf2\") "
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.565724    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc277a42-6eb0-494d-bf16-45d0456b9cf2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "cc277a42-6eb0-494d-bf16-45d0456b9cf2" (UID: "cc277a42-6eb0-494d-bf16-45d0456b9cf2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.565761    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0d6117c5-d284-11f0-8d8e-4662b82523c3\") pod \"cc277a42-6eb0-494d-bf16-45d0456b9cf2\" (UID: \"cc277a42-6eb0-494d-bf16-45d0456b9cf2\") "
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.565817    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5whx5\" (UniqueName: \"kubernetes.io/projected/cc277a42-6eb0-494d-bf16-45d0456b9cf2-kube-api-access-5whx5\") pod \"cc277a42-6eb0-494d-bf16-45d0456b9cf2\" (UID: \"cc277a42-6eb0-494d-bf16-45d0456b9cf2\") "
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.565979    1295 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cc277a42-6eb0-494d-bf16-45d0456b9cf2-gcp-creds\") on node \"addons-101630\" DevicePath \"\""
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.568097    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc277a42-6eb0-494d-bf16-45d0456b9cf2-kube-api-access-5whx5" (OuterVolumeSpecName: "kube-api-access-5whx5") pod "cc277a42-6eb0-494d-bf16-45d0456b9cf2" (UID: "cc277a42-6eb0-494d-bf16-45d0456b9cf2"). InnerVolumeSpecName "kube-api-access-5whx5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.568964    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^0d6117c5-d284-11f0-8d8e-4662b82523c3" (OuterVolumeSpecName: "task-pv-storage") pod "cc277a42-6eb0-494d-bf16-45d0456b9cf2" (UID: "cc277a42-6eb0-494d-bf16-45d0456b9cf2"). InnerVolumeSpecName "pvc-0dc697a7-22ae-4145-93e5-e8a667689909". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.667364    1295 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0dc697a7-22ae-4145-93e5-e8a667689909\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0d6117c5-d284-11f0-8d8e-4662b82523c3\") on node \"addons-101630\" "
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.667403    1295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5whx5\" (UniqueName: \"kubernetes.io/projected/cc277a42-6eb0-494d-bf16-45d0456b9cf2-kube-api-access-5whx5\") on node \"addons-101630\" DevicePath \"\""
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.672574    1295 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-0dc697a7-22ae-4145-93e5-e8a667689909" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^0d6117c5-d284-11f0-8d8e-4662b82523c3") on node "addons-101630"
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.768051    1295 reconciler_common.go:299] "Volume detached for volume \"pvc-0dc697a7-22ae-4145-93e5-e8a667689909\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0d6117c5-d284-11f0-8d8e-4662b82523c3\") on node \"addons-101630\" DevicePath \"\""
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.868097    1295 scope.go:117] "RemoveContainer" containerID="5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32"
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.877126    1295 scope.go:117] "RemoveContainer" containerID="5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32"
	Dec 06 09:15:13 addons-101630 kubelet[1295]: E1206 09:15:13.877542    1295 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32\": container with ID starting with 5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32 not found: ID does not exist" containerID="5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32"
	Dec 06 09:15:13 addons-101630 kubelet[1295]: I1206 09:15:13.877582    1295 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32"} err="failed to get container status \"5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32\": rpc error: code = NotFound desc = could not find container \"5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32\": container with ID starting with 5c1c4633754f4ac5ebe373101a8ad19e4130c99bd06cc006f3c5a9f947622e32 not found: ID does not exist"
	Dec 06 09:15:14 addons-101630 kubelet[1295]: I1206 09:15:14.239300    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc277a42-6eb0-494d-bf16-45d0456b9cf2" path="/var/lib/kubelet/pods/cc277a42-6eb0-494d-bf16-45d0456b9cf2/volumes"
	Dec 06 09:15:24 addons-101630 kubelet[1295]: I1206 09:15:24.237468    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cdw5g" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:15:24 addons-101630 kubelet[1295]: E1206 09:15:24.351392    1295 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-qrdwx" podUID="3bf9406e-6469-4c0a-b3d1-35797ae72deb"
	Dec 06 09:15:53 addons-101630 kubelet[1295]: I1206 09:15:53.235798    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-kwpl7" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:16:05 addons-101630 kubelet[1295]: I1206 09:16:05.236664    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hz4j9" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:16:15 addons-101630 kubelet[1295]: I1206 09:16:15.236211    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-lv6tv" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:16:44 addons-101630 kubelet[1295]: I1206 09:16:44.237154    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cdw5g" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:17:01 addons-101630 kubelet[1295]: I1206 09:17:01.894796    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-qrdwx" podStartSLOduration=229.883954376 podStartE2EDuration="3m51.894739672s" podCreationTimestamp="2025-12-06 09:13:10 +0000 UTC" firstStartedPulling="2025-12-06 09:15:36.257407707 +0000 UTC m=+152.104290471" lastFinishedPulling="2025-12-06 09:15:38.268192993 +0000 UTC m=+154.115075767" observedRunningTime="2025-12-06 09:15:38.978186403 +0000 UTC m=+154.825069179" watchObservedRunningTime="2025-12-06 09:17:01.894739672 +0000 UTC m=+237.741622446"
	Dec 06 09:17:01 addons-101630 kubelet[1295]: I1206 09:17:01.959594    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/711f4444-9e74-4668-90a4-3f50df84110a-gcp-creds\") pod \"hello-world-app-5d498dc89-95r6g\" (UID: \"711f4444-9e74-4668-90a4-3f50df84110a\") " pod="default/hello-world-app-5d498dc89-95r6g"
	Dec 06 09:17:01 addons-101630 kubelet[1295]: I1206 09:17:01.959660    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52h74\" (UniqueName: \"kubernetes.io/projected/711f4444-9e74-4668-90a4-3f50df84110a-kube-api-access-52h74\") pod \"hello-world-app-5d498dc89-95r6g\" (UID: \"711f4444-9e74-4668-90a4-3f50df84110a\") " pod="default/hello-world-app-5d498dc89-95r6g"
	
	
	==> storage-provisioner [fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157] <==
	W1206 09:16:38.639790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:40.642629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:40.646740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:42.650529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:42.655480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:44.658203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:44.663062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:46.666368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:46.670287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:48.673260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:48.677958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:50.681007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:50.685719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:52.688863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:52.692698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:54.695782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:54.700537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:56.703827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:56.707889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:58.710991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:16:58.714925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:00.718502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:00.723301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:02.726724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:02.730811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
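
The storage-provisioner tail above is noisy but cosmetic: the client making these calls (most likely its leader-election loop, polling every ~2s) still reads the deprecated core/v1 Endpoints API, and the apiserver attaches the deprecation warning to every request. A quick way to compare the old and new resources on the same cluster context (illustrative only; object names in the output are whatever the run created, nothing here is required by the test):

	kubectl --context addons-101630 -n kube-system get endpoints
	kubectl --context addons-101630 -n kube-system get endpointslices
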
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-101630 -n addons-101630
helpers_test.go:269: (dbg) Run:  kubectl --context addons-101630 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-ssqfv ingress-nginx-admission-patch-6zxgf
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-101630 describe pod ingress-nginx-admission-create-ssqfv ingress-nginx-admission-patch-6zxgf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-101630 describe pod ingress-nginx-admission-create-ssqfv ingress-nginx-admission-patch-6zxgf: exit status 1 (58.053042ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ssqfv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6zxgf" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-101630 describe pod ingress-nginx-admission-create-ssqfv ingress-nginx-admission-patch-6zxgf: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (240.545918ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:17:04.449174  519116 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:17:04.449503  519116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:17:04.449513  519116 out.go:374] Setting ErrFile to fd 2...
	I1206 09:17:04.449517  519116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:17:04.449714  519116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:17:04.449947  519116 mustload.go:66] Loading cluster: addons-101630
	I1206 09:17:04.450285  519116 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:17:04.450307  519116 addons.go:622] checking whether the cluster is paused
	I1206 09:17:04.450385  519116 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:17:04.450403  519116 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:17:04.450777  519116 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:17:04.468820  519116 ssh_runner.go:195] Run: systemctl --version
	I1206 09:17:04.468880  519116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:17:04.486753  519116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:17:04.579420  519116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:17:04.579585  519116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:17:04.608931  519116 cri.go:89] found id: "954acc79626b04eab60c4a9a5d092768b7731f53f0a32e1a66caa04e1bce991f"
	I1206 09:17:04.608950  519116 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:17:04.608954  519116 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:17:04.608957  519116 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:17:04.608960  519116 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:17:04.608963  519116 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:17:04.608966  519116 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:17:04.608969  519116 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:17:04.608971  519116 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:17:04.608977  519116 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:17:04.608979  519116 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:17:04.608983  519116 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:17:04.608985  519116 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:17:04.608988  519116 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:17:04.608991  519116 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:17:04.608996  519116 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:17:04.608999  519116 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:17:04.609003  519116 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:17:04.609006  519116 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:17:04.609008  519116 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:17:04.609011  519116 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:17:04.609014  519116 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:17:04.609017  519116 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:17:04.609019  519116 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:17:04.609028  519116 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:17:04.609033  519116 cri.go:89] found id: ""
	I1206 09:17:04.609078  519116 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:17:04.623701  519116 out.go:203] 
	W1206 09:17:04.624739  519116 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:17:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:17:04.624760  519116 out.go:285] * 
	W1206 09:17:04.627788  519116 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:17:04.628717  519116 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
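
Every `addons disable` failure in this report shares one root cause, visible in the stderr above: before touching an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` inside the node, and on this crio node the /run/runc state directory does not exist, so runc exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED before the addon is ever disabled. A minimal reproduction from the Jenkins host, assuming the profile from this run (the `ls` target is illustrative, not something the test itself performs):

	# the exact command the paused-cluster check runs inside the node
	minikube -p addons-101630 ssh -- sudo runc list -f json
	# see which runtime state directories actually exist under /run
	minikube -p addons-101630 ssh -- ls /run
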
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable ingress --alsologtostderr -v=1: exit status 11 (244.950463ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:17:04.690498  519178 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:17:04.690862  519178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:17:04.690877  519178 out.go:374] Setting ErrFile to fd 2...
	I1206 09:17:04.690884  519178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:17:04.691196  519178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:17:04.691561  519178 mustload.go:66] Loading cluster: addons-101630
	I1206 09:17:04.692015  519178 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:17:04.692043  519178 addons.go:622] checking whether the cluster is paused
	I1206 09:17:04.692185  519178 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:17:04.692204  519178 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:17:04.692912  519178 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:17:04.712621  519178 ssh_runner.go:195] Run: systemctl --version
	I1206 09:17:04.712698  519178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:17:04.731447  519178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:17:04.824257  519178 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:17:04.824334  519178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:17:04.853692  519178 cri.go:89] found id: "954acc79626b04eab60c4a9a5d092768b7731f53f0a32e1a66caa04e1bce991f"
	I1206 09:17:04.853716  519178 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:17:04.853722  519178 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:17:04.853727  519178 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:17:04.853731  519178 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:17:04.853735  519178 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:17:04.853737  519178 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:17:04.853740  519178 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:17:04.853743  519178 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:17:04.853750  519178 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:17:04.853754  519178 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:17:04.853757  519178 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:17:04.853759  519178 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:17:04.853762  519178 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:17:04.853765  519178 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:17:04.853775  519178 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:17:04.853784  519178 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:17:04.853792  519178 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:17:04.853794  519178 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:17:04.853797  519178 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:17:04.853800  519178 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:17:04.853802  519178 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:17:04.853805  519178 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:17:04.853808  519178 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:17:04.853811  519178 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:17:04.853813  519178 cri.go:89] found id: ""
	I1206 09:17:04.853853  519178 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:17:04.869248  519178 out.go:203] 
	W1206 09:17:04.870238  519178 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:17:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:17:04.870262  519178 out.go:285] * 
	W1206 09:17:04.873302  519178 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:17:04.874358  519178 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.75s)

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qs5wx" [101d6192-8f9c-496b-b1e5-d9de912cf7f7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003942083s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (254.421184ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:44.594118  516015 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:44.594414  516015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:44.594426  516015 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:44.594430  516015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:44.594654  516015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:44.594924  516015 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:44.595236  516015 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:44.595256  516015 addons.go:622] checking whether the cluster is paused
	I1206 09:14:44.595333  516015 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:44.595354  516015 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:44.595767  516015 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:44.617027  516015 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:44.617084  516015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:44.634804  516015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:44.728940  516015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:44.729018  516015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:44.764005  516015 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:44.764046  516015 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:44.764054  516015 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:44.764061  516015 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:44.764067  516015 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:44.764074  516015 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:44.764078  516015 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:44.764084  516015 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:44.764090  516015 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:44.764107  516015 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:44.764117  516015 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:44.764123  516015 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:44.764129  516015 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:44.764134  516015 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:44.764140  516015 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:44.764154  516015 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:44.764164  516015 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:44.764172  516015 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:44.764177  516015 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:44.764182  516015 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:44.764187  516015 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:44.764193  516015 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:44.764203  516015 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:44.764208  516015 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:44.764214  516015 cri.go:89] found id: ""
	I1206 09:14:44.764275  516015 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:44.779273  516015 out.go:203] 
	W1206 09:14:44.780390  516015 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:44.780414  516015 out.go:285] * 
	W1206 09:14:44.783797  516015 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:44.784880  516015 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.26s)

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.473049ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003871615s
addons_test.go:463: (dbg) Run:  kubectl --context addons-101630 top pods -n kube-system
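
`kubectl top pods` only works once metrics-server is registered behind the aggregated metrics.k8s.io API, so the step above doubles as an API-availability check. An equivalent direct probe against the same context (the raw path is the standard v1beta1 endpoint, not anything test-specific):

	kubectl --context addons-101630 get --raw /apis/metrics.k8s.io/v1beta1/nodes
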
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (242.230606ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:34.083278  514230 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:34.083433  514230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:34.083443  514230 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:34.083447  514230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:34.083635  514230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:34.083878  514230 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:34.084188  514230 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:34.084209  514230 addons.go:622] checking whether the cluster is paused
	I1206 09:14:34.084289  514230 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:34.084306  514230 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:34.084739  514230 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:34.102898  514230 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:34.102953  514230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:34.121765  514230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:34.214077  514230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:34.214172  514230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:34.244147  514230 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:34.244172  514230 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:34.244177  514230 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:34.244180  514230 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:34.244185  514230 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:34.244191  514230 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:34.244196  514230 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:34.244202  514230 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:34.244207  514230 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:34.244227  514230 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:34.244243  514230 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:34.244248  514230 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:34.244253  514230 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:34.244260  514230 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:34.244263  514230 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:34.244272  514230 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:34.244280  514230 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:34.244287  514230 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:34.244292  514230 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:34.244297  514230 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:34.244304  514230 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:34.244308  514230 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:34.244313  514230 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:34.244317  514230 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:34.244330  514230 cri.go:89] found id: ""
	I1206 09:14:34.244379  514230 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:34.258956  514230 out.go:203] 
	W1206 09:14:34.260007  514230 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:34.260025  514230 out.go:285] * 
	W1206 09:14:34.263687  514230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:34.264895  514230 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

TestAddons/parallel/CSI (44.25s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1206 09:14:30.455055  502867 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 09:14:30.458200  502867 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 09:14:30.458224  502867 kapi.go:107] duration metric: took 3.198482ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.211203ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-101630 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-101630 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a9b7f5ea-7572-441c-b2cd-c00aac1fdd1d] Pending
helpers_test.go:352: "task-pv-pod" [a9b7f5ea-7572-441c-b2cd-c00aac1fdd1d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a9b7f5ea-7572-441c-b2cd-c00aac1fdd1d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004416322s
addons_test.go:572: (dbg) Run:  kubectl --context addons-101630 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-101630 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-101630 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-101630 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-101630 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-101630 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-101630 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [cc277a42-6eb0-494d-bf16-45d0456b9cf2] Pending
helpers_test.go:352: "task-pv-pod-restore" [cc277a42-6eb0-494d-bf16-45d0456b9cf2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [cc277a42-6eb0-494d-bf16-45d0456b9cf2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003781189s
addons_test.go:614: (dbg) Run:  kubectl --context addons-101630 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-101630 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-101630 delete volumesnapshot new-snapshot-demo
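
For context on the round-trip above: the external-snapshotter binds the namespaced VolumeSnapshot (new-snapshot-demo) to a cluster-scoped VolumeSnapshotContent and flips .status.readyToUse once the csi-hostpath driver has cut the snapshot, which is what the two readyToUse polls wait for; the restored claim (hpvc-restore) then references that snapshot as its dataSource. Before the clean-up deletes, the binding could be inspected directly (same context; illustrative, not part of the test):

	kubectl --context addons-101630 get volumesnapshot new-snapshot-demo -o yaml
	kubectl --context addons-101630 get volumesnapshotcontent
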
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (252.304619ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:15:14.274925  516855 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:15:14.275280  516855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:14.275296  516855 out.go:374] Setting ErrFile to fd 2...
	I1206 09:15:14.275303  516855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:14.275609  516855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:15:14.275973  516855 mustload.go:66] Loading cluster: addons-101630
	I1206 09:15:14.276488  516855 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:15:14.276519  516855 addons.go:622] checking whether the cluster is paused
	I1206 09:15:14.276642  516855 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:15:14.276668  516855 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:15:14.277202  516855 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:15:14.296976  516855 ssh_runner.go:195] Run: systemctl --version
	I1206 09:15:14.297033  516855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:15:14.314651  516855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:15:14.408395  516855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:15:14.408503  516855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:15:14.438412  516855 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:15:14.438445  516855 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:15:14.438450  516855 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:15:14.438466  516855 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:15:14.438470  516855 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:15:14.438479  516855 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:15:14.438483  516855 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:15:14.438487  516855 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:15:14.438492  516855 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:15:14.438514  516855 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:15:14.438522  516855 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:15:14.438526  516855 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:15:14.438531  516855 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:15:14.438536  516855 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:15:14.438541  516855 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:15:14.438552  516855 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:15:14.438560  516855 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:15:14.438567  516855 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:15:14.438571  516855 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:15:14.438574  516855 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:15:14.438579  516855 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:15:14.438587  516855 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:15:14.438592  516855 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:15:14.438600  516855 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:15:14.438605  516855 cri.go:89] found id: ""
	I1206 09:15:14.438667  516855 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:15:14.454519  516855 out.go:203] 
	W1206 09:15:14.455669  516855 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:15:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:15:14.455696  516855 out.go:285] * 
	W1206 09:15:14.459229  516855 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:15:14.460309  516855 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
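
Each of these exit status 11 failures follows the same path visible in the trace above: before touching an addon, minikube gates on an "is the cluster paused" check, and the last step of that check shells out to `sudo runc list -f json`. On a crio node that state directory does not exist (crio keeps runtime state under its own root), so the command exits 1 and the addon operation aborts. A minimal Go sketch of the two shell-outs, assuming it runs on the node itself; listKubeSystem and checkPaused are illustrative stand-ins, not minikube's real API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystem mirrors the crictl call at cri.go:54 above: collect
// the IDs of all kube-system containers over the CRI socket.
func listKubeSystem() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// checkPaused mirrors the failing step: `runc list` reads /run/runc,
// which is absent when crio manages the containers, so it exits 1.
func checkPaused() error {
	if err := exec.Command("sudo", "runc", "list", "-f", "json").Run(); err != nil {
		return fmt.Errorf("check paused: list paused: runc: %w", err)
	}
	return nil
}

func main() {
	ids, _ := listKubeSystem()
	fmt.Printf("found %d kube-system containers\n", len(ids))
	if err := checkPaused(); err != nil {
		fmt.Println("would exit with MK_ADDON_DISABLE_PAUSED:", err)
	}
}

On a crio node the crictl listing succeeds (it goes through the CRI socket), while checkPaused reproduces the `open /run/runc: no such file or directory` error seen in every trace in this group.
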
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (239.821741ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:15:14.523963  516917 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:15:14.524132  516917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:14.524145  516917 out.go:374] Setting ErrFile to fd 2...
	I1206 09:15:14.524153  516917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:14.524380  516917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:15:14.524678  516917 mustload.go:66] Loading cluster: addons-101630
	I1206 09:15:14.525025  516917 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:15:14.525053  516917 addons.go:622] checking whether the cluster is paused
	I1206 09:15:14.525159  516917 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:15:14.525189  516917 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:15:14.525645  516917 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:15:14.543500  516917 ssh_runner.go:195] Run: systemctl --version
	I1206 09:15:14.543549  516917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:15:14.560291  516917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:15:14.652327  516917 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:15:14.652418  516917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:15:14.682020  516917 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:15:14.682047  516917 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:15:14.682053  516917 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:15:14.682056  516917 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:15:14.682060  516917 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:15:14.682065  516917 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:15:14.682070  516917 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:15:14.682074  516917 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:15:14.682079  516917 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:15:14.682088  516917 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:15:14.682092  516917 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:15:14.682105  516917 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:15:14.682111  516917 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:15:14.682114  516917 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:15:14.682117  516917 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:15:14.682122  516917 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:15:14.682127  516917 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:15:14.682130  516917 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:15:14.682133  516917 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:15:14.682136  516917 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:15:14.682141  516917 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:15:14.682144  516917 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:15:14.682147  516917 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:15:14.682149  516917 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:15:14.682152  516917 cri.go:89] found id: ""
	I1206 09:15:14.682189  516917 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:15:14.697112  516917 out.go:203] 
	W1206 09:15:14.698217  516917 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:15:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:15:14.698235  516917 out.go:285] * 
	W1206 09:15:14.701300  516917 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:15:14.702398  516917 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.25s)

                                                
                                    
TestAddons/parallel/Headlamp (2.48s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-101630 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-101630 --alsologtostderr -v=1: exit status 11 (237.848717ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:14:22.773745  512892 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:22.773873  512892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:22.773882  512892 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:22.773886  512892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:22.774088  512892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:22.774321  512892 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:22.774684  512892 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:22.774707  512892 addons.go:622] checking whether the cluster is paused
	I1206 09:14:22.774792  512892 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:22.774810  512892 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:22.775230  512892 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:22.793057  512892 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:22.793101  512892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:22.809951  512892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:22.901915  512892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:22.901983  512892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:22.930862  512892 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:22.930889  512892 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:22.930893  512892 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:22.930896  512892 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:22.930899  512892 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:22.930906  512892 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:22.930909  512892 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:22.930912  512892 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:22.930915  512892 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:22.930929  512892 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:22.930932  512892 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:22.930935  512892 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:22.930938  512892 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:22.930941  512892 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:22.930944  512892 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:22.930955  512892 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:22.930962  512892 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:22.930966  512892 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:22.930969  512892 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:22.930972  512892 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:22.930978  512892 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:22.930981  512892 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:22.930983  512892 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:22.930985  512892 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:22.930988  512892 cri.go:89] found id: ""
	I1206 09:14:22.931033  512892 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:22.944790  512892 out.go:203] 
	W1206 09:14:22.945879  512892 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:22.945900  512892 out.go:285] * 
	W1206 09:14:22.948906  512892 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:22.950032  512892 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-101630 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-101630
helpers_test.go:243: (dbg) docker inspect addons-101630:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95",
	        "Created": "2025-12-06T09:12:51.478087231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505345,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:12:51.506945744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95/hostname",
	        "HostsPath": "/var/lib/docker/containers/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95/hosts",
	        "LogPath": "/var/lib/docker/containers/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95/6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95-json.log",
	        "Name": "/addons-101630",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-101630:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-101630",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6796bdef6f099a779ea3a16ca70c1a524a567302ff7b024d9907ec51f48aab95",
	                "LowerDir": "/var/lib/docker/overlay2/56e13216f6e4cfd65f1c4013d5539855c950bb7b703c820415e90216deee444d-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56e13216f6e4cfd65f1c4013d5539855c950bb7b703c820415e90216deee444d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56e13216f6e4cfd65f1c4013d5539855c950bb7b703c820415e90216deee444d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56e13216f6e4cfd65f1c4013d5539855c950bb7b703c820415e90216deee444d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-101630",
	                "Source": "/var/lib/docker/volumes/addons-101630/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-101630",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-101630",
	                "name.minikube.sigs.k8s.io": "addons-101630",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f992c1691876af5862657fbfe223814bf969fca236e2a4ad9a4022552816a151",
	            "SandboxKey": "/var/run/docker/netns/f992c1691876",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-101630": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7391d62a417b1088b41fe0868bc0021bc08c635885cb16409110efe92f7d10e1",
	                    "EndpointID": "bd713ef4fdcd52ffc6c64940a9b75b3eb8f70925f5a2a1d69e8b592980946ac4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "86:20:0b:7b:63:cd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-101630",
	                        "6796bdef6f09"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
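
The SSH step in each trace above connects to the 22/tcp mapping published in this dump (127.0.0.1:32888). A short Go sketch of extracting that endpoint from `docker inspect` output, modeling only the JSON fields visible above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models just the NetworkSettings.Ports subtree from the
// `docker inspect addons-101630` output above.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-101630").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	// The same value minikube computes with its inspect -f template
	// before opening the ssh client (sshutil.go:53 above).
	ssh := cs[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh endpoint: %s:%s\n", ssh.HostIp, ssh.HostPort)
}
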
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-101630 -n addons-101630
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-101630 logs -n 25: (1.117993792s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-449563 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-449563 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete │ -p download-only-449563 │ download-only-449563 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start │ -o=json --download-only -p download-only-757324 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-757324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete │ -p download-only-757324 │ download-only-757324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start │ -o=json --download-only -p download-only-215937 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-215937 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete │ -p download-only-215937 │ download-only-215937 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete │ -p download-only-449563 │ download-only-449563 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete │ -p download-only-757324 │ download-only-757324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete │ -p download-only-215937 │ download-only-215937 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start │ --download-only -p download-docker-416316 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-416316 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
	│ delete │ -p download-docker-416316 │ download-docker-416316 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start │ --download-only -p binary-mirror-279469 --alsologtostderr --binary-mirror http://127.0.0.1:43659 --driver=docker  --container-runtime=crio │ binary-mirror-279469 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
	│ delete │ -p binary-mirror-279469 │ binary-mirror-279469 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ addons │ disable dashboard -p addons-101630 │ addons-101630 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
	│ addons │ enable dashboard -p addons-101630 │ addons-101630 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
	│ start │ -p addons-101630 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-101630 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:14 UTC │
	│ addons │ addons-101630 addons disable volcano --alsologtostderr -v=1 │ addons-101630 │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ │
	│ addons │ addons-101630 addons disable gcp-auth --alsologtostderr -v=1 │ addons-101630 │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ │
	│ addons │ enable headlamp -p addons-101630 --alsologtostderr -v=1 │ addons-101630 │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:12:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:12:31.412896  504702 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:12:31.413136  504702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:31.413144  504702 out.go:374] Setting ErrFile to fd 2...
	I1206 09:12:31.413148  504702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:31.413324  504702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:12:31.413876  504702 out.go:368] Setting JSON to false
	I1206 09:12:31.414760  504702 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6895,"bootTime":1765005456,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:12:31.414816  504702 start.go:143] virtualization: kvm guest
	I1206 09:12:31.416602  504702 out.go:179] * [addons-101630] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:12:31.417703  504702 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:12:31.417751  504702 notify.go:221] Checking for updates...
	I1206 09:12:31.419889  504702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:12:31.421039  504702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:12:31.422198  504702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:12:31.423326  504702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:12:31.424412  504702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:12:31.425751  504702 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:12:31.449324  504702 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:12:31.449422  504702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:12:31.501556  504702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-06 09:12:31.492180721 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:12:31.501694  504702 docker.go:319] overlay module found
	I1206 09:12:31.503324  504702 out.go:179] * Using the docker driver based on user configuration
	I1206 09:12:31.504329  504702 start.go:309] selected driver: docker
	I1206 09:12:31.504341  504702 start.go:927] validating driver "docker" against <nil>
	I1206 09:12:31.504351  504702 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:12:31.504933  504702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:12:31.557998  504702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-06 09:12:31.547832527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:12:31.558171  504702 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:12:31.558404  504702 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:12:31.560074  504702 out.go:179] * Using Docker driver with root privileges
	I1206 09:12:31.561159  504702 cni.go:84] Creating CNI manager for ""
	I1206 09:12:31.561227  504702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:12:31.561238  504702 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:12:31.561313  504702 start.go:353] cluster config:
	{Name:addons-101630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:31.562599  504702 out.go:179] * Starting "addons-101630" primary control-plane node in "addons-101630" cluster
	I1206 09:12:31.563642  504702 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:12:31.564904  504702 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:12:31.566068  504702 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:31.566098  504702 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:12:31.566109  504702 cache.go:65] Caching tarball of preloaded images
	I1206 09:12:31.566172  504702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:12:31.566215  504702 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:12:31.566232  504702 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:12:31.566557  504702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/config.json ...
	I1206 09:12:31.566585  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/config.json: {Name:mk7ed1e2c38d36040bf6a585683d05bd81f4d33c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
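
The profile config written above is plain JSON and can be inspected directly on the build host. A minimal sketch, assuming jq is installed; field names follow the cluster config dump earlier in this log:

	jq '{Driver, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' \
	  /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/config.json
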
	I1206 09:12:31.582888  504702 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 09:12:31.583014  504702 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 09:12:31.583031  504702 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 09:12:31.583036  504702 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 09:12:31.583043  504702 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 09:12:31.583051  504702 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1206 09:12:44.734872  504702 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1206 09:12:44.734914  504702 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:12:44.734962  504702 start.go:360] acquireMachinesLock for addons-101630: {Name:mk1e28ced48dde6057c3e722484e184aa9b7e960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:12:44.735065  504702 start.go:364] duration metric: took 80.862µs to acquireMachinesLock for "addons-101630"
	I1206 09:12:44.735088  504702 start.go:93] Provisioning new machine with config: &{Name:addons-101630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:12:44.735166  504702 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:12:44.736766  504702 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 09:12:44.736992  504702 start.go:159] libmachine.API.Create for "addons-101630" (driver="docker")
	I1206 09:12:44.737031  504702 client.go:173] LocalClient.Create starting
	I1206 09:12:44.737171  504702 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:12:44.836295  504702 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:12:44.970292  504702 cli_runner.go:164] Run: docker network inspect addons-101630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:12:44.986888  504702 cli_runner.go:211] docker network inspect addons-101630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:12:44.986970  504702 network_create.go:284] running [docker network inspect addons-101630] to gather additional debugging logs...
	I1206 09:12:44.986989  504702 cli_runner.go:164] Run: docker network inspect addons-101630
	W1206 09:12:45.002121  504702 cli_runner.go:211] docker network inspect addons-101630 returned with exit code 1
	I1206 09:12:45.002152  504702 network_create.go:287] error running [docker network inspect addons-101630]: docker network inspect addons-101630: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-101630 not found
	I1206 09:12:45.002165  504702 network_create.go:289] output of [docker network inspect addons-101630]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-101630 not found
	
	** /stderr **
	I1206 09:12:45.002281  504702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:12:45.018948  504702 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00163d350}
	I1206 09:12:45.018998  504702 network_create.go:124] attempt to create docker network addons-101630 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 09:12:45.019069  504702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-101630 addons-101630
	I1206 09:12:45.064974  504702 network_create.go:108] docker network addons-101630 192.168.49.0/24 created
	I1206 09:12:45.065004  504702 kic.go:121] calculated static IP "192.168.49.2" for the "addons-101630" container
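
The network step above can be reproduced by hand with the same flags minikube passed to the CLI; the create command below is copied from the Run: line in this log, with an inspect call added as a hedged suggestion for verifying the result:

	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=addons-101630 addons-101630
	# Verify the subnet/gateway that the static IP 192.168.49.2 is derived from:
	docker network inspect addons-101630
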
	I1206 09:12:45.065074  504702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:12:45.081284  504702 cli_runner.go:164] Run: docker volume create addons-101630 --label name.minikube.sigs.k8s.io=addons-101630 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:12:45.099590  504702 oci.go:103] Successfully created a docker volume addons-101630
	I1206 09:12:45.099686  504702 cli_runner.go:164] Run: docker run --rm --name addons-101630-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101630 --entrypoint /usr/bin/test -v addons-101630:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:12:47.640995  504702 cli_runner.go:217] Completed: docker run --rm --name addons-101630-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101630 --entrypoint /usr/bin/test -v addons-101630:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (2.541253677s)
	I1206 09:12:47.641031  504702 oci.go:107] Successfully prepared a docker volume addons-101630
	I1206 09:12:47.641064  504702 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:47.641074  504702 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:12:47.641140  504702 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-101630:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:12:51.407258  504702 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-101630:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.766062108s)
	I1206 09:12:51.407294  504702 kic.go:203] duration metric: took 3.766216145s to extract preloaded images to volume ...
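
The extraction above is an ordinary tar invocation run inside a throwaway container so the preloaded images land in the named volume. A condensed sketch of the same pattern; PRELOAD_TARBALL and KICBASE_IMAGE are hypothetical shorthands for the long path and digest-pinned image shown in the log:

	# Hypothetical variables standing in for the full values in this log
	# (the log pins the image by sha256 digest as well):
	PRELOAD_TARBALL=/home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	KICBASE_IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
	  -v addons-101630:/extractDir \
	  "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
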
	W1206 09:12:51.407395  504702 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:12:51.407437  504702 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:12:51.407505  504702 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:12:51.462437  504702 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-101630 --name addons-101630 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101630 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-101630 --network addons-101630 --ip 192.168.49.2 --volume addons-101630:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:12:51.718783  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Running}}
	I1206 09:12:51.737651  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:12:51.755634  504702 cli_runner.go:164] Run: docker exec addons-101630 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:12:51.799606  504702 oci.go:144] the created container "addons-101630" has a running status.
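
The three checks following the docker run above are plain inspect/exec calls; the same commands, as they could be run by hand:

	docker container inspect addons-101630 --format '{{.State.Running}}'
	docker container inspect addons-101630 --format '{{.State.Status}}'
	# Sanity-check that the rootfs is populated before provisioning continues:
	docker exec addons-101630 stat /var/lib/dpkg/alternatives/iptables
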
	I1206 09:12:51.799642  504702 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa...
	I1206 09:12:51.893128  504702 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:12:51.917550  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:12:51.938728  504702 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:12:51.938750  504702 kic_runner.go:114] Args: [docker exec --privileged addons-101630 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:12:51.984816  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:12:52.009096  504702 machine.go:94] provisionDockerMachine start ...
	I1206 09:12:52.009231  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.033169  504702 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:52.033546  504702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1206 09:12:52.033574  504702 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:12:52.169514  504702 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-101630
	
	I1206 09:12:52.169546  504702 ubuntu.go:182] provisioning hostname "addons-101630"
	I1206 09:12:52.169608  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.188919  504702 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:52.189194  504702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1206 09:12:52.189210  504702 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-101630 && echo "addons-101630" | sudo tee /etc/hostname
	I1206 09:12:52.328676  504702 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-101630
	
	I1206 09:12:52.328759  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.347583  504702 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:52.347901  504702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1206 09:12:52.347932  504702 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101630/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:12:52.474936  504702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:12:52.474967  504702 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:12:52.475024  504702 ubuntu.go:190] setting up certificates
	I1206 09:12:52.475037  504702 provision.go:84] configureAuth start
	I1206 09:12:52.475103  504702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101630
	I1206 09:12:52.492559  504702 provision.go:143] copyHostCerts
	I1206 09:12:52.492632  504702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:12:52.492740  504702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:12:52.492803  504702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:12:52.492890  504702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.addons-101630 san=[127.0.0.1 192.168.49.2 addons-101630 localhost minikube]
	I1206 09:12:52.608060  504702 provision.go:177] copyRemoteCerts
	I1206 09:12:52.608127  504702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:12:52.608167  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.624952  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:52.717390  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:12:52.735583  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:12:52.752214  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:12:52.768659  504702 provision.go:87] duration metric: took 293.605715ms to configureAuth
	I1206 09:12:52.768686  504702 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:12:52.768874  504702 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:12:52.768990  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:52.786106  504702 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:52.786319  504702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1206 09:12:52.786337  504702 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:12:53.052424  504702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:12:53.052451  504702 machine.go:97] duration metric: took 1.043320212s to provisionDockerMachine
	I1206 09:12:53.052482  504702 client.go:176] duration metric: took 8.315442728s to LocalClient.Create
	I1206 09:12:53.052509  504702 start.go:167] duration metric: took 8.315519103s to libmachine.API.Create "addons-101630"
	I1206 09:12:53.052517  504702 start.go:293] postStartSetup for "addons-101630" (driver="docker")
	I1206 09:12:53.052527  504702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:12:53.052588  504702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:12:53.052630  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:53.069709  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:53.162795  504702 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:12:53.166097  504702 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:12:53.166127  504702 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:12:53.166139  504702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:12:53.166204  504702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:12:53.166239  504702 start.go:296] duration metric: took 113.714771ms for postStartSetup
	I1206 09:12:53.166580  504702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101630
	I1206 09:12:53.183477  504702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/config.json ...
	I1206 09:12:53.183744  504702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:12:53.183803  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:53.201048  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:53.290146  504702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:12:53.294581  504702 start.go:128] duration metric: took 8.559401152s to createHost
	I1206 09:12:53.294605  504702 start.go:83] releasing machines lock for "addons-101630", held for 8.559527188s
	I1206 09:12:53.294684  504702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101630
	I1206 09:12:53.311960  504702 ssh_runner.go:195] Run: cat /version.json
	I1206 09:12:53.312014  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:53.312040  504702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:12:53.312123  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:12:53.330325  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:53.331079  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:12:53.474551  504702 ssh_runner.go:195] Run: systemctl --version
	I1206 09:12:53.481127  504702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:12:53.516588  504702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:12:53.521234  504702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:12:53.521321  504702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:12:53.545062  504702 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:12:53.545091  504702 start.go:496] detecting cgroup driver to use...
	I1206 09:12:53.545123  504702 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:12:53.545172  504702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:12:53.561187  504702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:12:53.572672  504702 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:12:53.572722  504702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:12:53.587854  504702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:12:53.603952  504702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:12:53.684401  504702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:12:53.767529  504702 docker.go:234] disabling docker service ...
	I1206 09:12:53.767598  504702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:12:53.786175  504702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:12:53.797951  504702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:12:53.879093  504702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:12:53.958283  504702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:12:53.970090  504702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:12:53.983283  504702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:12:53.983343  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:53.993102  504702 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:12:53.993160  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.001293  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.009243  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.017332  504702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:12:54.024821  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.032774  504702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.045285  504702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:54.053442  504702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:12:54.060162  504702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:12:54.066790  504702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:12:54.142493  504702 ssh_runner.go:195] Run: sudo systemctl restart crio
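
All of the cri-o changes above funnel into /etc/crio/crio.conf.d/02-crio.conf plus a crictl endpoint file, followed by a restart. A condensed sketch of the key edits, taken from the Run: lines in this log (the sysctl, conmon_cgroup, and CNI cleanup steps are omitted here):

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
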
	I1206 09:12:54.275826  504702 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:12:54.275916  504702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:12:54.279851  504702 start.go:564] Will wait 60s for crictl version
	I1206 09:12:54.279901  504702 ssh_runner.go:195] Run: which crictl
	I1206 09:12:54.283250  504702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:12:54.308664  504702 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:12:54.308754  504702 ssh_runner.go:195] Run: crio --version
	I1206 09:12:54.335731  504702 ssh_runner.go:195] Run: crio --version
	I1206 09:12:54.364788  504702 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:12:54.365752  504702 cli_runner.go:164] Run: docker network inspect addons-101630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:12:54.382630  504702 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 09:12:54.386836  504702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
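
The hosts-file edit above rewrites /etc/hosts rather than appending blindly: it filters out any stale host.minikube.internal line, appends the gateway mapping, and copies the result back. The same one-liner from the log, reflowed for readability:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$ \
	  && sudo cp /tmp/h.$$ /etc/hosts
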
	I1206 09:12:54.397230  504702 kubeadm.go:884] updating cluster {Name:addons-101630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:12:54.397381  504702 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:54.397436  504702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:12:54.430480  504702 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:12:54.430509  504702 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:12:54.430558  504702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:12:54.456213  504702 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:12:54.456238  504702 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:12:54.456247  504702 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1206 09:12:54.456345  504702 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-101630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:12:54.456410  504702 ssh_runner.go:195] Run: crio config
	I1206 09:12:54.501101  504702 cni.go:84] Creating CNI manager for ""
	I1206 09:12:54.501129  504702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:12:54.501152  504702 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:12:54.501174  504702 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101630 NodeName:addons-101630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:12:54.501311  504702 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-101630"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:12:54.501372  504702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:12:54.509605  504702 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:12:54.509671  504702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:12:54.517231  504702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 09:12:54.529998  504702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:12:54.544312  504702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
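
Before the kubeadm run below, the staged config could be checked offline. A hedged sketch, assuming the 'config validate' subcommand is available in the v1.34 kubeadm binary minikube staged above:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
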
	I1206 09:12:54.556296  504702 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:12:54.559665  504702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:12:54.569007  504702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:12:54.645931  504702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:12:54.670619  504702 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630 for IP: 192.168.49.2
	I1206 09:12:54.670639  504702 certs.go:195] generating shared ca certs ...
	I1206 09:12:54.670663  504702 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.670795  504702 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:12:54.777420  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt ...
	I1206 09:12:54.777479  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt: {Name:mk4dab107adc72fe9ab137d87913311c42622b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.777711  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key ...
	I1206 09:12:54.777731  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key: {Name:mk373e3e449365234022f0260849cb6b80917be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.777876  504702 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:12:54.814495  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt ...
	I1206 09:12:54.814523  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt: {Name:mkc75d10b349bbd61defbab7a134a0ca10cef764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.814710  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key ...
	I1206 09:12:54.814729  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key: {Name:mkdc7ddf757b3685a0de21cbad18972d9eca2094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.814836  504702 certs.go:257] generating profile certs ...
	I1206 09:12:54.814918  504702 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.key
	I1206 09:12:54.814934  504702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt with IP's: []
	I1206 09:12:54.885246  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt ...
	I1206 09:12:54.885274  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: {Name:mke3a8fc1995c7e2da3188a157968d5258718f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.885483  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.key ...
	I1206 09:12:54.885501  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.key: {Name:mk5ad30806708ae35935900aaf4453acdeb14b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.885613  504702 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key.3009eb1e
	I1206 09:12:54.885643  504702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt.3009eb1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 09:12:54.912517  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt.3009eb1e ...
	I1206 09:12:54.912535  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt.3009eb1e: {Name:mk688dbdf5b8d5eb5a8b3085973f549913e01b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.912673  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key.3009eb1e ...
	I1206 09:12:54.912692  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key.3009eb1e: {Name:mk8aa45ee08202d8a600fdd610b95296932f1d41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.912786  504702 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt.3009eb1e -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt
	I1206 09:12:54.912888  504702 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key.3009eb1e -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key
	I1206 09:12:54.912954  504702 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.key
	I1206 09:12:54.912979  504702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.crt with IP's: []
	I1206 09:12:54.950759  504702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.crt ...
	I1206 09:12:54.950781  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.crt: {Name:mk0ddbafc232fd30924fff603ec46c9e12bac8e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.950936  504702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.key ...
	I1206 09:12:54.950958  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.key: {Name:mkdc8387c784ad7beba3e2538592178e38b98aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:54.951184  504702 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:12:54.951230  504702 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:12:54.951268  504702 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:12:54.951299  504702 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:12:54.951913  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:12:54.970185  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:12:54.986977  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:12:55.003430  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:12:55.019916  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:12:55.036098  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:12:55.052242  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:12:55.068424  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:12:55.085700  504702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:12:55.104506  504702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:12:55.116431  504702 ssh_runner.go:195] Run: openssl version
	I1206 09:12:55.122500  504702 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:55.129436  504702 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:12:55.138771  504702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:55.142306  504702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:55.142351  504702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:55.176946  504702 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:12:55.184844  504702 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
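
The b5213941.0 link name above follows OpenSSL's subject-hash convention: the CA certificate's subject hash plus a .0 suffix, which is how the TLS stack locates CAs in /etc/ssl/certs. A sketch deriving the link name instead of hard-coding it:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
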
	I1206 09:12:55.192387  504702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:12:55.195921  504702 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:12:55.195966  504702 kubeadm.go:401] StartCluster: {Name:addons-101630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-101630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:55.196039  504702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:12:55.196087  504702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:12:55.224224  504702 cri.go:89] found id: ""
	I1206 09:12:55.224308  504702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:12:55.232407  504702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:12:55.240030  504702 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:12:55.240075  504702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:12:55.247377  504702 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:12:55.247391  504702 kubeadm.go:158] found existing configuration files:
	
	I1206 09:12:55.247439  504702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:12:55.254725  504702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:12:55.254776  504702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:12:55.261873  504702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:12:55.269053  504702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:12:55.269105  504702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:12:55.276035  504702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:12:55.283210  504702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:12:55.283265  504702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:12:55.290280  504702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:12:55.297367  504702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:12:55.297417  504702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
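The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so that `kubeadm init` regenerates it (here every grep exits 2 simply because the files do not exist yet on a fresh node). A minimal Go sketch of that pattern, assuming plain local shelling-out rather than minikube's ssh_runner; the helper name is ours, not minikube's:

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleConfig keeps a kubeconfig only if it already references the
// expected control-plane endpoint; otherwise it removes the file so kubeadm
// can regenerate it. grep exits non-zero if the pattern or file is missing.
func cleanupStaleConfig(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleConfig("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```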
	I1206 09:12:55.304189  504702 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:12:55.340887  504702 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:12:55.340964  504702 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:12:55.372674  504702 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:12:55.372760  504702 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:12:55.372821  504702 kubeadm.go:319] OS: Linux
	I1206 09:12:55.372890  504702 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:12:55.372963  504702 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:12:55.373036  504702 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:12:55.373086  504702 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:12:55.373124  504702 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:12:55.373183  504702 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:12:55.373232  504702 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:12:55.373306  504702 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:12:55.431857  504702 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:12:55.432002  504702 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:12:55.432117  504702 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:12:55.438736  504702 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:12:55.440567  504702 out.go:252]   - Generating certificates and keys ...
	I1206 09:12:55.440672  504702 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:12:55.440725  504702 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:12:55.890546  504702 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:12:56.213485  504702 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:12:56.826407  504702 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:12:56.977161  504702 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:12:57.344352  504702 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:12:57.344519  504702 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-101630 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 09:12:57.517791  504702 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:12:57.517956  504702 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-101630 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 09:12:57.610101  504702 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:12:57.788348  504702 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:12:58.119946  504702 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:12:58.120032  504702 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:12:58.529052  504702 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:12:58.595001  504702 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:12:58.782981  504702 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:12:58.853000  504702 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:12:58.948208  504702 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:12:58.948661  504702 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:12:58.952171  504702 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:12:58.953353  504702 out.go:252]   - Booting up control plane ...
	I1206 09:12:58.953481  504702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:12:58.953582  504702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:12:58.954134  504702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:12:58.981815  504702 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:12:58.981942  504702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:12:58.988145  504702 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:12:58.988352  504702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:12:58.988402  504702 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:12:59.083725  504702 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:12:59.083873  504702 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:12:59.584669  504702 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.263368ms
	I1206 09:12:59.588590  504702 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:12:59.588725  504702 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 09:12:59.588855  504702 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:12:59.588920  504702 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:13:01.368519  504702 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.779846247s
	I1206 09:13:01.832111  504702 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.243451665s
	I1206 09:13:03.589987  504702 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001382803s
	I1206 09:13:03.605042  504702 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:13:03.614518  504702 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:13:03.622221  504702 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:13:03.622538  504702 kubeadm.go:319] [mark-control-plane] Marking the node addons-101630 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:13:03.629956  504702 kubeadm.go:319] [bootstrap-token] Using token: umpzxk.mrukc1mpg3pqm1t5
	I1206 09:13:03.631078  504702 out.go:252]   - Configuring RBAC rules ...
	I1206 09:13:03.631220  504702 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:13:03.635626  504702 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:13:03.640299  504702 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:13:03.642599  504702 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:13:03.644711  504702 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:13:03.647520  504702 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:13:03.995904  504702 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:13:04.411229  504702 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:13:04.995765  504702 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:13:04.996613  504702 kubeadm.go:319] 
	I1206 09:13:04.996683  504702 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:13:04.996698  504702 kubeadm.go:319] 
	I1206 09:13:04.996770  504702 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:13:04.996775  504702 kubeadm.go:319] 
	I1206 09:13:04.996796  504702 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:13:04.996847  504702 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:13:04.996919  504702 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:13:04.996938  504702 kubeadm.go:319] 
	I1206 09:13:04.997022  504702 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:13:04.997033  504702 kubeadm.go:319] 
	I1206 09:13:04.997125  504702 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:13:04.997143  504702 kubeadm.go:319] 
	I1206 09:13:04.997198  504702 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:13:04.997282  504702 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:13:04.997350  504702 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:13:04.997359  504702 kubeadm.go:319] 
	I1206 09:13:04.997429  504702 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:13:04.997538  504702 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:13:04.997547  504702 kubeadm.go:319] 
	I1206 09:13:04.997612  504702 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token umpzxk.mrukc1mpg3pqm1t5 \
	I1206 09:13:04.997698  504702 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:13:04.997716  504702 kubeadm.go:319] 	--control-plane 
	I1206 09:13:04.997720  504702 kubeadm.go:319] 
	I1206 09:13:04.997796  504702 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:13:04.997803  504702 kubeadm.go:319] 
	I1206 09:13:04.997904  504702 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token umpzxk.mrukc1mpg3pqm1t5 \
	I1206 09:13:04.998027  504702 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
	I1206 09:13:05.000294  504702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:13:05.000421  504702 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:13:05.000443  504702 cni.go:84] Creating CNI manager for ""
	I1206 09:13:05.000452  504702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:13:05.001836  504702 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:13:05.002801  504702 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:13:05.007028  504702 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:13:05.007049  504702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:13:05.019904  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
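cni.go recommends kindnet here because the docker driver combined with the crio runtime has no built-in pod network, so minikube ships a manifest itself and applies it with the node-local kubectl. A hedged sketch of that selection and apply step; the function name and the fallback value are illustrative assumptions, while the binary and file paths are the ones in the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// chooseCNI mirrors the decision logged above: the docker driver with the
// crio runtime gets kindnet. The fallback value is an assumption made for
// this illustration, not minikube's real default logic.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "crio" {
		return "kindnet"
	}
	return "auto"
}

func main() {
	fmt.Println("recommended CNI:", chooseCNI("docker", "crio"))
	// Apply the rendered manifest exactly as the log does, using the
	// node-local kubectl binary and kubeconfig.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	fmt.Println(cmd.Run())
}
```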
	I1206 09:13:05.225835  504702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:13:05.225961  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:05.225961  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-101630 minikube.k8s.io/updated_at=2025_12_06T09_13_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-101630 minikube.k8s.io/primary=true
	I1206 09:13:05.236182  504702 ops.go:34] apiserver oom_adj: -16
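The ops.go line confirms the apiserver's legacy OOM adjustment is -16, meaning the kernel is strongly discouraged from OOM-killing it; the check is literally `cat /proc/$(pgrep kube-apiserver)/oom_adj`, as run two lines earlier. A small Go equivalent of that one-liner (single-PID case only; illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiServerOOMAdj reproduces `cat /proc/$(pgrep kube-apiserver)/oom_adj`:
// find the apiserver's PID, then read its legacy OOM adjustment from procfs.
func apiServerOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pid := strings.TrimSpace(string(out)) // assumes exactly one match
	b, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := apiServerOOMAdj()
	fmt.Println(v, err) // the log above shows -16
}
```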
	I1206 09:13:05.320632  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:05.820993  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:06.321339  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:06.820777  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:07.321320  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:07.820998  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:08.320784  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:08.821366  504702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:13:08.885728  504702 kubeadm.go:1114] duration metric: took 3.659838522s to wait for elevateKubeSystemPrivileges
	I1206 09:13:08.885763  504702 kubeadm.go:403] duration metric: took 13.689800256s to StartCluster
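The 3.66s elevateKubeSystemPrivileges metric covers the polling loop visible above: the minikube-rbac clusterrolebinding is created once, and then `kubectl get sa default` is retried roughly every 500ms until the default service account appears, which signals that the controller-manager's RBAC bootstrap is complete. A sketch of that wait loop, assuming the paths shown in the log and a made-up helper name:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until the default
// service account exists or the timeout elapses, matching the ~500ms
// cadence of the log lines above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // service account exists: RBAC bootstrap is done
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.34.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		3*time.Minute))
}
```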
	I1206 09:13:08.885780  504702 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:13:08.885882  504702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:13:08.886376  504702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:13:08.886604  504702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:13:08.886626  504702 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:13:08.886696  504702 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 09:13:08.886817  504702 addons.go:70] Setting default-storageclass=true in profile "addons-101630"
	I1206 09:13:08.886835  504702 addons.go:70] Setting yakd=true in profile "addons-101630"
	I1206 09:13:08.886857  504702 addons.go:239] Setting addon yakd=true in "addons-101630"
	I1206 09:13:08.886859  504702 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-101630"
	I1206 09:13:08.886877  504702 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:13:08.886891  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.886868  504702 addons.go:70] Setting cloud-spanner=true in profile "addons-101630"
	I1206 09:13:08.886914  504702 addons.go:239] Setting addon cloud-spanner=true in "addons-101630"
	I1206 09:13:08.886882  504702 addons.go:70] Setting metrics-server=true in profile "addons-101630"
	I1206 09:13:08.886930  504702 addons.go:70] Setting gcp-auth=true in profile "addons-101630"
	I1206 09:13:08.886937  504702 addons.go:70] Setting ingress-dns=true in profile "addons-101630"
	I1206 09:13:08.886948  504702 addons.go:239] Setting addon ingress-dns=true in "addons-101630"
	I1206 09:13:08.886954  504702 mustload.go:66] Loading cluster: addons-101630
	I1206 09:13:08.886957  504702 addons.go:239] Setting addon metrics-server=true in "addons-101630"
	I1206 09:13:08.886975  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.886975  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887008  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887099  504702 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-101630"
	I1206 09:13:08.887131  504702 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-101630"
	I1206 09:13:08.887169  504702 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-101630"
	I1206 09:13:08.887172  504702 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:13:08.887196  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887204  504702 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-101630"
	I1206 09:13:08.887240  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887324  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887440  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887517  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887534  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887548  504702 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-101630"
	I1206 09:13:08.887554  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887562  504702 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-101630"
	I1206 09:13:08.887585  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.887721  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887764  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.887534  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.888064  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.890103  504702 out.go:179] * Verifying Kubernetes components...
	I1206 09:13:08.888078  504702 addons.go:70] Setting registry=true in profile "addons-101630"
	I1206 09:13:08.890335  504702 addons.go:239] Setting addon registry=true in "addons-101630"
	I1206 09:13:08.890368  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.890962  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.888115  504702 addons.go:70] Setting registry-creds=true in profile "addons-101630"
	I1206 09:13:08.891197  504702 addons.go:239] Setting addon registry-creds=true in "addons-101630"
	I1206 09:13:08.891224  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.888123  504702 addons.go:70] Setting storage-provisioner=true in profile "addons-101630"
	I1206 09:13:08.891908  504702 addons.go:239] Setting addon storage-provisioner=true in "addons-101630"
	I1206 09:13:08.891944  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.892614  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.888132  504702 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-101630"
	I1206 09:13:08.895245  504702 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-101630"
	I1206 09:13:08.888139  504702 addons.go:70] Setting volcano=true in profile "addons-101630"
	I1206 09:13:08.895433  504702 addons.go:239] Setting addon volcano=true in "addons-101630"
	I1206 09:13:08.888150  504702 addons.go:70] Setting inspektor-gadget=true in profile "addons-101630"
	I1206 09:13:08.895650  504702 addons.go:239] Setting addon inspektor-gadget=true in "addons-101630"
	I1206 09:13:08.895681  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.888154  504702 addons.go:70] Setting volumesnapshots=true in profile "addons-101630"
	I1206 09:13:08.896018  504702 addons.go:239] Setting addon volumesnapshots=true in "addons-101630"
	I1206 09:13:08.896045  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.886922  504702 addons.go:70] Setting ingress=true in profile "addons-101630"
	I1206 09:13:08.896314  504702 addons.go:239] Setting addon ingress=true in "addons-101630"
	I1206 09:13:08.896367  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.905241  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.905767  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.905825  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.906184  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.907547  504702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:13:08.907707  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.908311  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.913581  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
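Each cli_runner line above is the same probe: `docker container inspect <name> --format={{.State.Status}}`, run once per addon goroutine to confirm the node container is still up before dialing SSH into it. A minimal Go wrapper around that exact command (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus runs the same probe as the cli_runner lines above:
// docker container inspect with a Go template extracting .State.Status.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	status, err := containerStatus("addons-101630")
	fmt.Println(status, err) // e.g. "running"
}
```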
	I1206 09:13:08.953432  504702 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:13:08.953447  504702 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:13:08.953432  504702 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:13:08.955153  504702 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:13:08.955177  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:13:08.955271  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.955639  504702 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:13:08.955783  504702 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:13:08.955811  504702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:13:08.955933  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.957538  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:13:08.957712  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:13:08.957734  504702 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:13:08.957794  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.958339  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.958614  504702 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:13:08.958963  504702 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:13:08.958978  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:13:08.959033  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.961341  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:13:08.961397  504702 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:13:08.962406  504702 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:13:08.962426  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:13:08.962523  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:13:08.962590  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.964598  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:13:08.965866  504702 addons.go:239] Setting addon default-storageclass=true in "addons-101630"
	I1206 09:13:08.965910  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.966151  504702 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:13:08.966431  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.968440  504702 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:13:08.968481  504702 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:13:08.969670  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:13:08.969702  504702 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:13:08.969721  504702 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:13:08.969738  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:13:08.970788  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.971160  504702 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:13:08.971764  504702 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-101630"
	I1206 09:13:08.971809  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:08.972163  504702 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:13:08.972276  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:08.972292  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:13:08.972346  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.972703  504702 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:13:08.972717  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:13:08.972762  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.973571  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:13:08.974591  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:13:08.975485  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:13:08.975528  504702 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:13:08.976306  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:13:08.976339  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:13:08.976413  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.976799  504702 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:13:08.976813  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:13:08.976866  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.989091  504702 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:13:08.989173  504702 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:13:08.990589  504702 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:13:08.990608  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:13:08.990669  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:08.990851  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:13:08.990863  504702 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:13:08.990920  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:09.008363  504702 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:13:09.012007  504702 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:13:09.012030  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:13:09.012100  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:09.022266  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.028061  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.031926  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.039729  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.041773  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	W1206 09:13:09.042286  504702 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 09:13:09.050891  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.055005  504702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:13:09.064960  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.069168  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.074621  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.077213  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.081285  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.084768  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.084867  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:09.084953  504702 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:13:09.085000  504702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:13:09.085061  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:09.086267  504702 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:13:09.087343  504702 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:13:09.088670  504702 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:13:09.088724  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:13:09.088788  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	W1206 09:13:09.095184  504702 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 09:13:09.095221  504702 retry.go:31] will retry after 188.294333ms: ssh: handshake failed: EOF
	I1206 09:13:09.118496  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	W1206 09:13:09.119603  504702 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 09:13:09.119844  504702 retry.go:31] will retry after 207.750917ms: ssh: handshake failed: EOF
	I1206 09:13:09.126277  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
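The two handshake EOFs above are transient: with more than a dozen addon installers dialing 127.0.0.1:32888 concurrently, sshd can drop a connection mid-handshake, and retry.go simply redials after a short randomized delay. A sketch of that retry shape using golang.org/x/crypto/ssh; the credentials are placeholders and the fixed delay stands in for minikube's randomized backoff:

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry redials on transient handshake failures, roughly matching
// the "will retry after ..." lines above. InsecureIgnoreHostKey is tolerable
// here only because the target is a local test container.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(200 * time.Millisecond) // real delay is randomized
	}
	return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, err)
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("placeholder")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	_, err := dialWithRetry("127.0.0.1:32888", cfg, 3)
	fmt.Println(err)
}
```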
	I1206 09:13:09.135920  504702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:13:09.185061  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:13:09.185083  504702 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:13:09.200572  504702 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:13:09.200742  504702 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:13:09.201157  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:13:09.205022  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:13:09.205044  504702 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:13:09.214538  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:13:09.218369  504702 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:13:09.218393  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:13:09.226049  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:13:09.226072  504702 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:13:09.230831  504702 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:13:09.230851  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:13:09.237965  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:13:09.242861  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:13:09.246358  504702 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:13:09.246380  504702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:13:09.252272  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:13:09.252295  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:13:09.252853  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:13:09.257150  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:13:09.258035  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:13:09.265195  504702 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:13:09.265216  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:13:09.270734  504702 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:13:09.270761  504702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:13:09.276664  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:13:09.280635  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:13:09.291641  504702 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:13:09.291672  504702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:13:09.315741  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:13:09.322600  504702 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:13:09.322632  504702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:13:09.326831  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:13:09.326920  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:13:09.357107  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:13:09.376958  504702 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:13:09.376989  504702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:13:09.401692  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:13:09.401736  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:13:09.432256  504702 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
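The "host record injected" line is the outcome of the sed pipeline run at 09:13:09.055: minikube splices a hosts block into the CoreDNS Corefile (via `kubectl replace` on the coredns ConfigMap) so that host.minikube.internal resolves to the gateway IP, and the same pipeline inserts a `log` directive before `errors`. The Corefile fragment, reconstructed from that sed expression, looks like:

```
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
```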
	I1206 09:13:09.435211  504702 node_ready.go:35] waiting up to 6m0s for node "addons-101630" to be "Ready" ...
	I1206 09:13:09.437274  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:13:09.437298  504702 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:13:09.459050  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:13:09.459083  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:13:09.468978  504702 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:13:09.469006  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:13:09.537370  504702 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:13:09.537403  504702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:13:09.540717  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:13:09.540782  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:13:09.548074  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:13:09.630806  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:13:09.630836  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:13:09.703914  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:13:09.703948  504702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:13:09.747159  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:13:09.747186  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:13:09.804407  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:13:09.804434  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:13:09.827919  504702 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:13:09.828019  504702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:13:09.897753  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:13:09.941465  504702 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-101630" context rescaled to 1 replicas
	I1206 09:13:10.444119  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.229519742s)
	I1206 09:13:10.444161  504702 addons.go:495] Verifying addon ingress=true in "addons-101630"
	I1206 09:13:10.444284  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.190467153s)
	I1206 09:13:10.444342  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18717862s)
	I1206 09:13:10.444369  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.186282479s)
	I1206 09:13:10.444394  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.167708126s)
	I1206 09:13:10.444406  504702 addons.go:495] Verifying addon registry=true in "addons-101630"
	I1206 09:13:10.444544  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.163878s)
	I1206 09:13:10.444653  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.128878851s)
	I1206 09:13:10.444731  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.087510642s)
	I1206 09:13:10.444267  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.201386254s)
	I1206 09:13:10.444231  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.206230844s)
	I1206 09:13:10.445796  504702 addons.go:495] Verifying addon metrics-server=true in "addons-101630"
	I1206 09:13:10.446569  504702 out.go:179] * Verifying registry addon...
	I1206 09:13:10.446639  504702 out.go:179] * Verifying ingress addon...
	I1206 09:13:10.447438  504702 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-101630 service yakd-dashboard -n yakd-dashboard
	
	I1206 09:13:10.449048  504702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:13:10.449610  504702 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:13:10.451927  504702 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:13:10.452060  504702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:13:10.452165  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1206 09:13:10.454506  504702 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
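The storage-provisioner-rancher warning is an optimistic-concurrency conflict, not a broken addon: two writers raced to update the `local-path` StorageClass, so the second Update carried a stale resourceVersion. The stock client-go remedy is to re-read the object and retry on conflict; a minimal sketch of that generic pattern (the standard library idiom, not minikube's actual fix):

```go
package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markDefault re-reads the StorageClass on every attempt so the Update
// always carries a fresh resourceVersion, avoiding "the object has been
// modified; please apply your changes to the latest version".
func markDefault(cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
}
```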
	I1206 09:13:10.936404  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.388289386s)
	W1206 09:13:10.936451  504702 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:13:10.936555  504702 retry.go:31] will retry after 207.959161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
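The stderr above is the usual CRD bootstrapping race: the VolumeSnapshotClass object is submitted in the same kubectl apply batch as the CRDs that define its kind, so its REST mapping does not exist yet and the batch exits non-zero even though the CRDs themselves were created. minikube handles this by retrying the apply. An alternative is to wait for the CRD to report Established=True before applying dependent objects; a minimal client-go sketch, with the kubeconfig path taken from the log and all timeouts illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForCRD blocks until the named CRD reports Established=True, at which
    // point custom resources of that kind (e.g. VolumeSnapshotClass) can be
    // applied without the "resource mapping not found" failure seen above.
    func waitForCRD(ctx context.Context, c apiextclient.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // not created yet; keep polling
    			}
    			for _, cond := range crd.Status.Conditions {
    				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := apiextclient.NewForConfigOrDie(cfg)
    	if err := waitForCRD(context.Background(), client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
    		panic(err)
    	}
    	fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
    }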
	I1206 09:13:10.936683  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.038812463s)
	I1206 09:13:10.936723  504702 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-101630"
	I1206 09:13:10.938312  504702 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:13:10.940123  504702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:13:10.942883  504702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:13:10.942900  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:10.952179  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:10.952429  504702 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:13:10.952447  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
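Each kapi.go waiter above runs the same loop: list the pods matching a label selector in a namespace and poll until every match is Running. A hedged client-go equivalent, using the csi-hostpath-driver selector and kubeconfig path from the log (the polling intervals are illustrative, not minikube's own):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podsReady reports whether at least one pod matches the selector and every
    // match has reached phase Running, mirroring the kapi.go wait loop above.
    func podsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil || len(pods.Items) == 0 {
    		return false, nil // transient errors and empty lists just mean: keep polling
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			return podsReady(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver")
    		})
    	fmt.Println("csi-hostpath-driver pods ready:", err == nil)
    }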
	I1206 09:13:11.145432  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1206 09:13:11.438643  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:11.443335  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:11.451523  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:11.452750  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:11.944192  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:11.952344  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:11.952418  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:12.443926  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:12.452247  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:12.452380  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:12.943244  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:12.952818  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:12.952848  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:13.443088  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:13.452072  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:13.452250  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:13.636601  504702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.491123936s)
	W1206 09:13:13.938769  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:13.943507  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:13.951812  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:13.951854  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:14.443587  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:14.452042  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:14.452308  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:14.943255  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:14.951173  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:14.952392  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:15.443106  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:15.452555  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:15.452622  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1206 09:13:15.939561  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:15.943187  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:15.951196  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:15.952440  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:16.443789  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:16.451849  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:16.452086  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:16.575546  504702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:13:16.575611  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:16.593684  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:16.693305  504702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:13:16.706003  504702 addons.go:239] Setting addon gcp-auth=true in "addons-101630"
	I1206 09:13:16.706052  504702 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:13:16.706391  504702 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:13:16.724882  504702 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:13:16.724942  504702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:13:16.742677  504702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:13:16.834813  504702 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:13:16.836322  504702 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:13:16.837598  504702 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:13:16.837616  504702 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:13:16.852035  504702 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:13:16.852061  504702 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:13:16.865112  504702 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:13:16.865134  504702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:13:16.878219  504702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
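The gcp-auth install reduces to the pattern visible in the log: stage the manifests under /etc/kubernetes/addons/ and run the bundled kubectl against the cluster kubeconfig with one -f flag per file. A sketch of that final step, assuming the files are already in place (the helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyManifests mirrors the command above: invoke the staged kubectl binary
    // with KUBECONFIG pointing at the cluster config and one -f per manifest.
    func applyManifests(files ...string) error {
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.2/kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := applyManifests(
    		"/etc/kubernetes/addons/gcp-auth-ns.yaml",
    		"/etc/kubernetes/addons/gcp-auth-service.yaml",
    		"/etc/kubernetes/addons/gcp-auth-webhook.yaml",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }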
	I1206 09:13:16.943078  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:16.952602  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:16.952810  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:17.186797  504702 addons.go:495] Verifying addon gcp-auth=true in "addons-101630"
	I1206 09:13:17.187963  504702 out.go:179] * Verifying gcp-auth addon...
	I1206 09:13:17.189865  504702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:13:17.196721  504702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:13:17.196743  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:17.443298  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:17.451271  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:17.452630  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:17.693690  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:17.942581  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:17.951656  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:17.951827  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:18.193268  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 09:13:18.438422  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:18.442654  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:18.451787  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:18.452047  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:18.692890  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:18.943299  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:18.951347  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:18.952564  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:19.193679  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:19.443195  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:19.451065  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:19.452363  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:19.693350  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:19.943309  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:19.951360  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:19.952599  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:20.193857  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 09:13:20.438926  504702 node_ready.go:57] node "addons-101630" has "Ready":"False" status (will retry)
	I1206 09:13:20.443751  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:20.451996  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:20.452096  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:20.693136  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:20.943836  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:20.952405  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:20.952409  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:21.193445  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:21.437822  504702 node_ready.go:49] node "addons-101630" is "Ready"
	I1206 09:13:21.437850  504702 node_ready.go:38] duration metric: took 12.002606923s for node "addons-101630" to be "Ready" ...
	I1206 09:13:21.437866  504702 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:13:21.437914  504702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
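Before probing health, the test waits for the apiserver process itself to appear, using the pgrep invocation shown above. A small sketch of the same check; the polling interval is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the process check in the log: -f matches against
    // the full command line, -x requires the pattern to match it exactly, and
    // -n picks the newest match. pgrep exits 0 only when a process matched.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	for !apiserverRunning() {
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("kube-apiserver process is up")
    }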
	I1206 09:13:21.442623  504702 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:13:21.442652  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:21.451450  504702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:13:21.451498  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:21.451811  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:21.455397  504702 api_server.go:72] duration metric: took 12.568738114s to wait for apiserver process to appear ...
	I1206 09:13:21.455422  504702 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:13:21.455448  504702 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 09:13:21.459714  504702 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 09:13:21.460502  504702 api_server.go:141] control plane version: v1.34.2
	I1206 09:13:21.460526  504702 api_server.go:131] duration metric: took 5.096611ms to wait for apiserver health ...
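Once the process exists, readiness is confirmed by polling /healthz until it returns 200 "ok", as at 09:13:21.459714 above. A self-contained sketch of that probe; it skips TLS verification only to stay short, whereas minikube authenticates with the cluster's client certificates:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // healthzOK performs one GET against the apiserver healthz endpoint and
    // reports whether it answered 200.
    func healthzOK(url string) bool {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	url := "https://192.168.49.2:8443/healthz" // endpoint from the log
    	for !healthzOK(url) {
    		time.Sleep(time.Second)
    	}
    	fmt.Println("apiserver healthz: ok")
    }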
	I1206 09:13:21.460536  504702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:13:21.466052  504702 system_pods.go:59] 20 kube-system pods found
	I1206 09:13:21.466087  504702 system_pods.go:61] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending
	I1206 09:13:21.466100  504702 system_pods.go:61] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:21.466107  504702 system_pods.go:61] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending
	I1206 09:13:21.466115  504702 system_pods.go:61] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending
	I1206 09:13:21.466125  504702 system_pods.go:61] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending
	I1206 09:13:21.466130  504702 system_pods.go:61] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:21.466140  504702 system_pods.go:61] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:21.466145  504702 system_pods.go:61] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:21.466151  504702 system_pods.go:61] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:21.466159  504702 system_pods.go:61] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending
	I1206 09:13:21.466165  504702 system_pods.go:61] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:21.466172  504702 system_pods.go:61] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:21.466180  504702 system_pods.go:61] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:21.466190  504702 system_pods.go:61] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending
	I1206 09:13:21.466199  504702 system_pods.go:61] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:21.466207  504702 system_pods.go:61] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:21.466216  504702 system_pods.go:61] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending
	I1206 09:13:21.466225  504702 system_pods.go:61] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending
	I1206 09:13:21.466236  504702 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:21.466247  504702 system_pods.go:61] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:21.466259  504702 system_pods.go:74] duration metric: took 5.714323ms to wait for pod list to return data ...
	I1206 09:13:21.466272  504702 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:13:21.468290  504702 default_sa.go:45] found service account: "default"
	I1206 09:13:21.468313  504702 default_sa.go:55] duration metric: took 2.031273ms for default service account to be created ...
	I1206 09:13:21.468323  504702 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:13:21.471683  504702 system_pods.go:86] 20 kube-system pods found
	I1206 09:13:21.471712  504702 system_pods.go:89] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending
	I1206 09:13:21.471724  504702 system_pods.go:89] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:21.471730  504702 system_pods.go:89] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending
	I1206 09:13:21.471742  504702 system_pods.go:89] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending
	I1206 09:13:21.471748  504702 system_pods.go:89] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending
	I1206 09:13:21.471755  504702 system_pods.go:89] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:21.471762  504702 system_pods.go:89] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:21.471774  504702 system_pods.go:89] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:21.471780  504702 system_pods.go:89] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:21.471787  504702 system_pods.go:89] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending
	I1206 09:13:21.471797  504702 system_pods.go:89] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:21.471803  504702 system_pods.go:89] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:21.471811  504702 system_pods.go:89] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:21.471816  504702 system_pods.go:89] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending
	I1206 09:13:21.471826  504702 system_pods.go:89] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:21.471836  504702 system_pods.go:89] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:21.471843  504702 system_pods.go:89] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending
	I1206 09:13:21.471848  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending
	I1206 09:13:21.471857  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:21.471864  504702 system_pods.go:89] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:21.471884  504702 retry.go:31] will retry after 259.820638ms: missing components: kube-dns
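The retry.go lines follow a jittered-backoff pattern: re-check, and if a required component (here kube-dns, i.e. the coredns pod) is still Pending, sleep a short randomized interval and try again. A generic sketch of that pattern; the jitter factor, growth rate, and attempt cap are all illustrative:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter re-runs check until it succeeds, sleeping a randomized,
    // growing interval between attempts, in the spirit of the retry.go lines above.
    func retryWithJitter(attempts int, base time.Duration, check func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = check(); err == nil {
    			return nil
    		}
    		d := base + time.Duration(rand.Int63n(int64(base))) // jittered backoff
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	attempt := 0
    	err := retryWithJitter(10, 200*time.Millisecond, func() error {
    		attempt++
    		if attempt < 3 { // simulate coredns still coming up for two checks
    			return errors.New("missing components: kube-dns")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }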
	I1206 09:13:21.694200  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:21.800817  504702 system_pods.go:86] 20 kube-system pods found
	I1206 09:13:21.800862  504702 system_pods.go:89] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:13:21.800874  504702 system_pods.go:89] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:21.800886  504702 system_pods.go:89] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:13:21.800894  504702 system_pods.go:89] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:13:21.800903  504702 system_pods.go:89] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:13:21.800909  504702 system_pods.go:89] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:21.800915  504702 system_pods.go:89] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:21.800922  504702 system_pods.go:89] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:21.800928  504702 system_pods.go:89] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:21.800937  504702 system_pods.go:89] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:13:21.800943  504702 system_pods.go:89] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:21.800950  504702 system_pods.go:89] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:21.800957  504702 system_pods.go:89] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:21.800965  504702 system_pods.go:89] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:13:21.800972  504702 system_pods.go:89] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:21.800979  504702 system_pods.go:89] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:21.800986  504702 system_pods.go:89] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:13:21.800997  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:21.801007  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:21.801016  504702 system_pods.go:89] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:21.801039  504702 retry.go:31] will retry after 247.83868ms: missing components: kube-dns
	I1206 09:13:21.944568  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:21.951919  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:21.953182  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:22.053802  504702 system_pods.go:86] 20 kube-system pods found
	I1206 09:13:22.053841  504702 system_pods.go:89] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:13:22.053852  504702 system_pods.go:89] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:22.053862  504702 system_pods.go:89] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:13:22.053872  504702 system_pods.go:89] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:13:22.053899  504702 system_pods.go:89] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:13:22.053913  504702 system_pods.go:89] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:22.053922  504702 system_pods.go:89] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:22.053931  504702 system_pods.go:89] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:22.053939  504702 system_pods.go:89] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:22.053952  504702 system_pods.go:89] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:13:22.053960  504702 system_pods.go:89] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:22.053966  504702 system_pods.go:89] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:22.053975  504702 system_pods.go:89] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:22.053984  504702 system_pods.go:89] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:13:22.053994  504702 system_pods.go:89] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:22.054008  504702 system_pods.go:89] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:22.054021  504702 system_pods.go:89] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:13:22.054039  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:22.054051  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:22.054059  504702 system_pods.go:89] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:22.054086  504702 retry.go:31] will retry after 330.491691ms: missing components: kube-dns
	I1206 09:13:22.194651  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:22.391494  504702 system_pods.go:86] 20 kube-system pods found
	I1206 09:13:22.391533  504702 system_pods.go:89] "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:13:22.391539  504702 system_pods.go:89] "coredns-66bc5c9577-kwpl7" [37a21001-ad3b-43f0-bcf2-5d4893cac5ba] Running
	I1206 09:13:22.391548  504702 system_pods.go:89] "csi-hostpath-attacher-0" [168e2458-02cb-4052-b9c1-7e4bf0307eb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:13:22.391554  504702 system_pods.go:89] "csi-hostpath-resizer-0" [c142c539-86b0-4b21-af07-d1c86aaf0201] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:13:22.391564  504702 system_pods.go:89] "csi-hostpathplugin-d4rl2" [5639be4c-8f6a-4f7d-b7f5-cc7297de01d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:13:22.391571  504702 system_pods.go:89] "etcd-addons-101630" [e4d41be0-dbb2-49b6-9bdf-f7db94132cac] Running
	I1206 09:13:22.391577  504702 system_pods.go:89] "kindnet-j6wfg" [2f7fe392-9381-468b-affd-aafd45327482] Running
	I1206 09:13:22.391583  504702 system_pods.go:89] "kube-apiserver-addons-101630" [ba041201-9345-409a-95d2-aecbc97c1afb] Running
	I1206 09:13:22.391593  504702 system_pods.go:89] "kube-controller-manager-addons-101630" [1367085c-5dcf-4f26-8fe0-365215dc6c68] Running
	I1206 09:13:22.391603  504702 system_pods.go:89] "kube-ingress-dns-minikube" [b8a53688-c70a-4ee8-92ed-1fbeac868dbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:13:22.391612  504702 system_pods.go:89] "kube-proxy-tnjbc" [30c2ac5c-287b-4341-ba78-8fcebc86ff32] Running
	I1206 09:13:22.391618  504702 system_pods.go:89] "kube-scheduler-addons-101630" [1f0146b1-0fac-4f7a-958b-c63574aeae2d] Running
	I1206 09:13:22.391629  504702 system_pods.go:89] "metrics-server-85b7d694d7-gj9kl" [68ebcb0f-2296-4f3e-ab8b-439bbecea883] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:22.391634  504702 system_pods.go:89] "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:13:22.391642  504702 system_pods.go:89] "registry-6b586f9694-qh5nl" [988a8793-90b6-420a-884f-25c4adf43e94] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:22.391651  504702 system_pods.go:89] "registry-creds-764b6fb674-qrdwx" [3bf9406e-6469-4c0a-b3d1-35797ae72deb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:22.391660  504702 system_pods.go:89] "registry-proxy-cdw5g" [b83c8815-e09b-4bad-951d-5acdd08951e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:13:22.391675  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-99cb8" [bb336f82-f3f8-4cd3-acdf-b43f3f1af831] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:22.391683  504702 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rc4h2" [e4088ee8-3b56-438d-9cc9-181cdc625dea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:22.391687  504702 system_pods.go:89] "storage-provisioner" [98f6e660-6d3b-4052-a2b1-6b2ac23f150c] Running
	I1206 09:13:22.391696  504702 system_pods.go:126] duration metric: took 923.366501ms to wait for k8s-apps to be running ...
	I1206 09:13:22.391707  504702 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:13:22.391751  504702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:13:22.410633  504702 system_svc.go:56] duration metric: took 18.91386ms WaitForService to wait for kubelet
	I1206 09:13:22.410669  504702 kubeadm.go:587] duration metric: took 13.524014508s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
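The kubelet service check is a single systemd query; the exit code of `systemctl is-active --quiet` is the whole signal. A minimal sketch mirroring the exact command from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive mirrors the system_svc.go check above: systemctl exits 0
    // from `is-active --quiet` only when the queried unit is active.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletActive())
    }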
	I1206 09:13:22.410694  504702 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:13:22.413826  504702 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:13:22.413856  504702 node_conditions.go:123] node cpu capacity is 8
	I1206 09:13:22.413877  504702 node_conditions.go:105] duration metric: took 3.176509ms to run NodePressure ...
	I1206 09:13:22.413892  504702 start.go:242] waiting for startup goroutines ...
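The NodePressure verification reads the node object once: report its CPU and ephemeral-storage capacity (8 CPUs and 304681132Ki in this run) and confirm that none of the pressure conditions are True. A client-go sketch against the node name from the log:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-101630", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Capacity values as reported in the log lines above.
    	fmt.Printf("ephemeral-storage=%s cpu=%s\n",
    		node.Status.Capacity.StorageEphemeral().String(),
    		node.Status.Capacity.Cpu().String())
    	// NodePressure passes when none of these conditions report True.
    	for _, c := range node.Status.Conditions {
    		switch c.Type {
    		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    			fmt.Printf("%s=%s\n", c.Type, c.Status)
    		}
    	}
    }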
	I1206 09:13:22.490934  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:22.491011  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:22.491241  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:22.693513  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:22.943545  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:22.951721  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:22.952847  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:23.193869  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:23.444310  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:23.451432  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:23.452590  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:23.693318  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:23.943956  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:23.952122  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:23.952414  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:24.193358  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:24.443830  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:24.453848  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:24.453916  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:24.693801  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:24.944196  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:24.952918  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:24.952977  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:25.193729  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:25.444435  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:25.451673  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:25.452725  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:25.694323  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:25.944418  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:25.952083  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:25.952890  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:26.193088  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:26.444708  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:26.452126  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:26.452323  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:26.692830  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:26.943932  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:26.952310  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:26.952428  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:27.193153  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:27.443599  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:27.452450  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:27.452894  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:27.693229  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:27.943625  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:27.951858  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:27.951956  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:28.192919  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:28.444371  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:28.451561  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:28.452594  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:28.693399  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:28.943699  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:28.951958  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:28.952019  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:29.194549  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:29.444165  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:29.452859  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:29.452951  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:29.693123  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:29.956938  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:29.956961  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:29.957073  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:30.193395  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:30.444467  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:30.452174  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:30.452805  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:30.694118  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:30.944064  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:30.952535  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:30.952553  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:31.193442  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:31.444209  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:31.452723  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:31.452901  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:31.692762  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:31.944793  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:31.952500  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:31.952503  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:32.193531  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:32.443670  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:32.452197  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:32.452947  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:32.692581  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:32.943699  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:32.951828  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:32.952009  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:33.192632  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:33.444049  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:33.452570  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:33.452720  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:33.693626  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:33.944239  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:33.951313  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:33.952346  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:34.193853  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:34.444660  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:34.452200  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:34.452221  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:34.693834  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:34.945096  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:34.952995  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:34.953040  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:35.194994  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:35.444504  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:35.452417  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:35.453131  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:35.693632  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:35.944119  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:35.952814  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:35.952938  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:36.193943  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:36.444351  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:36.451756  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:36.452718  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:36.693353  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:36.943453  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:36.952189  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:36.953051  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:37.192748  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:37.444103  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:37.454666  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:37.455279  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:37.694338  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:37.945361  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:37.952626  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:37.952828  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:38.193939  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:38.444681  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:38.452705  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:38.453051  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:38.694017  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:38.944951  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:38.952744  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:38.953296  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:39.193760  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:39.443504  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:39.452375  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:39.452868  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:39.694196  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:39.947405  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:39.951972  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:39.953140  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:40.193549  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:40.444253  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:40.453324  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:40.453352  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:40.694087  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:40.944930  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:40.953342  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:40.953352  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:41.193703  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:41.444442  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:41.452300  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:41.453136  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:41.693662  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:41.944517  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:41.952315  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:41.952949  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:42.193971  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:42.446257  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:42.452816  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:42.452924  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:42.694671  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:42.943740  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:42.951835  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:42.951902  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:43.193600  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:43.443393  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:43.452925  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:43.452948  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:43.693629  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:43.943425  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:43.951355  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:43.952378  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:44.193274  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:44.443010  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:44.452292  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:44.452316  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:44.692989  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:44.944527  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:44.951855  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:44.951878  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:45.192852  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:45.444398  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:45.452910  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:45.452909  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:45.692671  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:45.943707  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:45.952998  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:45.953025  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:46.194743  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:46.444153  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:46.452613  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:46.452633  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:46.693550  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:46.943416  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:46.951563  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:46.952674  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:47.194639  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:47.443622  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:47.451945  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:47.452974  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:47.692809  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:47.944249  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:48.044528  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:48.044796  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:48.193409  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:48.443362  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:48.451923  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:48.452723  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:48.693924  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:48.944692  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:48.952151  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:48.952190  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:49.193730  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:49.444099  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:49.452332  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:49.452394  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:49.693469  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:49.944065  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:49.952605  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:49.952804  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:50.194754  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:50.444374  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:50.452917  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:50.452976  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:50.693257  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:50.944132  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:50.952939  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:50.953086  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:51.193262  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:51.443754  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:51.452322  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:51.452577  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:51.694044  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:51.944116  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:51.952829  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:51.952908  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:52.193389  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:52.443694  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:52.452185  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:52.452219  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:52.693602  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:52.943630  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:52.951787  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:52.951819  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:53.193053  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:53.444646  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:53.451765  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:53.451824  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:53.693439  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:53.944046  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:53.952730  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:53.952773  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:54.193861  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:54.444577  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:54.452273  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:54.453229  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:54.693294  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:54.943396  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:54.951493  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:54.952604  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:55.193907  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:55.444620  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:55.452337  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:55.453012  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:55.693102  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:55.943528  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:55.952047  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:55.952065  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:56.193398  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:56.443600  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:56.451884  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:56.452783  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:56.693435  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:56.945086  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:56.953244  504702 kapi.go:107] duration metric: took 46.504193263s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 09:13:56.953511  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:57.193109  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:57.444105  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:57.452648  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:57.694598  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:57.944945  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:57.952321  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:58.194177  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:58.444723  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:58.453420  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:58.693418  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:58.944095  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:58.952378  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:59.197362  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:59.444037  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:59.452811  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:59.695038  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:59.944744  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:59.953575  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:00.194789  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:00.444297  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:00.453132  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:00.693641  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:00.944607  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:00.953617  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:01.193306  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:01.444264  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:01.453269  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:01.693907  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:01.943920  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:01.952987  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:02.194832  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:02.444376  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:02.452357  504702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:02.694272  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:02.943887  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:02.952762  504702 kapi.go:107] duration metric: took 52.503147826s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:14:03.194141  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:03.443269  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:03.786520  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:03.943756  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:04.194017  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:04.444253  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:04.692855  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:04.944490  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:05.193557  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:05.443887  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:05.693836  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:05.944957  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:06.246780  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:06.444070  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:06.693115  504702 kapi.go:107] duration metric: took 49.503243952s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:14:06.694645  504702 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-101630 cluster.
	I1206 09:14:06.695751  504702 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:14:06.696826  504702 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
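For illustration only: the three messages above describe the gcp-auth webhook's opt-out mechanism. A minimal sketch of creating a pod that skips the credential mount follows; the label key `gcp-auth-skip-secret` is taken from the message above, while the pod name, image, and label value are assumptions, since the message only specifies the key.

	# hypothetical pod carrying the opt-out label at creation time
	kubectl run skip-gcp-auth-demo \
	  --image=busybox:1.28 \
	  --labels=gcp-auth-skip-secret=true \
	  --restart=Never -- sleep 3600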
	I1206 09:14:06.944403  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:07.443978  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:07.943240  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:08.444492  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:08.944380  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:09.443836  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:09.943626  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:10.443848  504702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:10.944340  504702 kapi.go:107] duration metric: took 1m0.004213562s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 09:14:10.946040  504702 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, registry-creds, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1206 09:14:10.947017  504702 addons.go:530] duration metric: took 1m2.060322172s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner registry-creds inspektor-gadget cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
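The enabled-addon inventory summarized above can also be checked after startup with the profile's own binary (a hedged example; `addons list` is a standard minikube subcommand, though this report itself only shows `addons disable` being invoked):

	out/minikube-linux-amd64 -p addons-101630 addons list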
	I1206 09:14:10.947060  504702 start.go:247] waiting for cluster config update ...
	I1206 09:14:10.947088  504702 start.go:256] writing updated cluster config ...
	I1206 09:14:10.947371  504702 ssh_runner.go:195] Run: rm -f paused
	I1206 09:14:10.951517  504702 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:14:10.954850  504702 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kwpl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.959318  504702 pod_ready.go:94] pod "coredns-66bc5c9577-kwpl7" is "Ready"
	I1206 09:14:10.959345  504702 pod_ready.go:86] duration metric: took 4.474513ms for pod "coredns-66bc5c9577-kwpl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.961193  504702 pod_ready.go:83] waiting for pod "etcd-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.964946  504702 pod_ready.go:94] pod "etcd-addons-101630" is "Ready"
	I1206 09:14:10.964968  504702 pod_ready.go:86] duration metric: took 3.754535ms for pod "etcd-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.966776  504702 pod_ready.go:83] waiting for pod "kube-apiserver-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.970154  504702 pod_ready.go:94] pod "kube-apiserver-addons-101630" is "Ready"
	I1206 09:14:10.970172  504702 pod_ready.go:86] duration metric: took 3.377753ms for pod "kube-apiserver-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:10.971944  504702 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:11.356077  504702 pod_ready.go:94] pod "kube-controller-manager-addons-101630" is "Ready"
	I1206 09:14:11.356105  504702 pod_ready.go:86] duration metric: took 384.143807ms for pod "kube-controller-manager-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:11.556507  504702 pod_ready.go:83] waiting for pod "kube-proxy-tnjbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:11.956232  504702 pod_ready.go:94] pod "kube-proxy-tnjbc" is "Ready"
	I1206 09:14:11.956263  504702 pod_ready.go:86] duration metric: took 399.722574ms for pod "kube-proxy-tnjbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:12.155510  504702 pod_ready.go:83] waiting for pod "kube-scheduler-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:12.554922  504702 pod_ready.go:94] pod "kube-scheduler-addons-101630" is "Ready"
	I1206 09:14:12.554955  504702 pod_ready.go:86] duration metric: took 399.414409ms for pod "kube-scheduler-addons-101630" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:12.554971  504702 pod_ready.go:40] duration metric: took 1.603415142s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
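The pod_ready polling above waits on each control-plane label selector in turn until the pod reports Ready. A roughly equivalent manual check, as a sketch (selectors copied from the log; the 4m timeout mirrors the stated 4m0s budget):

	# two of the six selectors polled above, checked with kubectl wait
	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m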
	I1206 09:14:12.609065  504702 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:14:12.610568  504702 out.go:179] * Done! kubectl is now configured to use "addons-101630" cluster and "default" namespace by default
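At this point the kubeconfig context has been switched to the new cluster, which can be confirmed with standard kubectl commands (a hedged example):

	kubectl config current-context   # expected output: addons-101630
	kubectl get pods -A              # lists the addon pods shown in the sections below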
	
	
	==> CRI-O <==
	Dec 06 09:14:09 addons-101630 crio[770]: time="2025-12-06T09:14:09.780169811Z" level=info msg="Starting container: 48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469" id=cd3f5842-1b1a-41d0-ac1f-bbbfaa824972 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:14:09 addons-101630 crio[770]: time="2025-12-06T09:14:09.783138764Z" level=info msg="Started container" PID=6140 containerID=48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469 description=kube-system/csi-hostpathplugin-d4rl2/csi-snapshotter id=cd3f5842-1b1a-41d0-ac1f-bbbfaa824972 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd7f7be94c3dfabf4a75f369894ec4dca2fbc5bceb465c8b6df55c9e6a7a9310
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.477816926Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4aaa3593-0e85-4bf5-87c2-6ec0bcf1e1c3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.477882928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.483874344Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d0b2980eae5edc737807b66a28754d2e8bb670600fab38a77772622c671e39f2 UID:acbd545b-4ccf-4516-a223-f5a9a8013869 NetNS:/var/run/netns/393f243e-ae79-48b4-a9f1-8065b5466fa7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000306418}] Aliases:map[]}"
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.48390236Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.493422879Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d0b2980eae5edc737807b66a28754d2e8bb670600fab38a77772622c671e39f2 UID:acbd545b-4ccf-4516-a223-f5a9a8013869 NetNS:/var/run/netns/393f243e-ae79-48b4-a9f1-8065b5466fa7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000306418}] Aliases:map[]}"
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.493574818Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.494316214Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.495144669Z" level=info msg="Ran pod sandbox d0b2980eae5edc737807b66a28754d2e8bb670600fab38a77772622c671e39f2 with infra container: default/busybox/POD" id=4aaa3593-0e85-4bf5-87c2-6ec0bcf1e1c3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.496360827Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7bbffd47-dd17-4233-bc85-276f9a1508d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.496531313Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7bbffd47-dd17-4233-bc85-276f9a1508d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.496569192Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7bbffd47-dd17-4233-bc85-276f9a1508d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.497132768Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=425acf29-cfe4-4f60-9442-849c7018147e name=/runtime.v1.ImageService/PullImage
	Dec 06 09:14:13 addons-101630 crio[770]: time="2025-12-06T09:14:13.498585464Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.884985444Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=425acf29-cfe4-4f60-9442-849c7018147e name=/runtime.v1.ImageService/PullImage
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.885686781Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=98b8a7b2-d9a4-4596-9224-da2a5bffe624 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.887046833Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=124adc92-aa17-4ef3-bdad-f16e3aa7c40d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.890506245Z" level=info msg="Creating container: default/busybox/busybox" id=8479d3d7-ccd4-4a0d-9417-9700fcaa3a44 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.890641098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.895716579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.896191215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.924222484Z" level=info msg="Created container 528ade097099ba992465647d1df50b6a2cf8d070fcbfe55c96d2fdb8c3657d95: default/busybox/busybox" id=8479d3d7-ccd4-4a0d-9417-9700fcaa3a44 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.924843034Z" level=info msg="Starting container: 528ade097099ba992465647d1df50b6a2cf8d070fcbfe55c96d2fdb8c3657d95" id=0518aad6-fd3b-4acc-96d8-d86200ea7429 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:14:15 addons-101630 crio[770]: time="2025-12-06T09:14:15.926603376Z" level=info msg="Started container" PID=6263 containerID=528ade097099ba992465647d1df50b6a2cf8d070fcbfe55c96d2fdb8c3657d95 description=default/busybox/busybox id=0518aad6-fd3b-4acc-96d8-d86200ea7429 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d0b2980eae5edc737807b66a28754d2e8bb670600fab38a77772622c671e39f2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	528ade097099b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   d0b2980eae5ed       busybox                                    default
	48412b93386c3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          14 seconds ago       Running             csi-snapshotter                          0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	b43a181098b64       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	0efcf1711c0c1       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	f53e5b7b950e3       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	a97bcb7cf0006       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 17 seconds ago       Running             gcp-auth                                 0                   05eee4163c54c       gcp-auth-78565c9fb4-hrdcs                  gcp-auth
	953fb247031e3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                20 seconds ago       Running             node-driver-registrar                    0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	027fd0861c8a3       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             21 seconds ago       Running             controller                               0                   ab245d0b98cb1       ingress-nginx-controller-6c8bf45fb-d2mvt   ingress-nginx
	e5e071c542354       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            25 seconds ago       Running             gadget                                   0                   80c0409bac441       gadget-qs5wx                               gadget
	f9a57fccfc33c       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             26 seconds ago       Exited              patch                                    2                   06a232ec35741       gcp-auth-certs-patch-qbf48                 gcp-auth
	79b2f00dfcfb1       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              28 seconds ago       Running             registry-proxy                           0                   747b39152df54       registry-proxy-cdw5g                       kube-system
	fb02c57fd629b       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              32 seconds ago       Running             csi-resizer                              0                   01b5606e78051       csi-hostpath-resizer-0                     kube-system
	696827076a771       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   33 seconds ago       Running             csi-external-health-monitor-controller   0                   fd7f7be94c3df       csi-hostpathplugin-d4rl2                   kube-system
	7a4130788df8e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     34 seconds ago       Running             amd-gpu-device-plugin                    0                   0fb6a894912ae       amd-gpu-device-plugin-hz4j9                kube-system
	8b6f64e34b32c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      36 seconds ago       Running             volume-snapshot-controller               0                   103e8ff330d55       snapshot-controller-7d9fbc56b8-rc4h2       kube-system
	41fc749cc8817       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     36 seconds ago       Running             nvidia-device-plugin-ctr                 0                   a4ccc80d877b0       nvidia-device-plugin-daemonset-lv6tv       kube-system
	5e38e2f074767       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   40 seconds ago       Exited              create                                   0                   df93be63af25e       gcp-auth-certs-create-r9qx4                gcp-auth
	e27ecbcda3b56       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      40 seconds ago       Running             volume-snapshot-controller               0                   317b469c32e7e       snapshot-controller-7d9fbc56b8-99cb8       kube-system
	fc9564c451d5d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             41 seconds ago       Running             csi-attacher                             0                   641ab723d81f3       csi-hostpath-attacher-0                    kube-system
	37ca696aec063       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             41 seconds ago       Exited              patch                                    2                   c386a25fc3e80       ingress-nginx-admission-patch-6zxgf        ingress-nginx
	7098dc77bd42b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               42 seconds ago       Running             minikube-ingress-dns                     0                   afbb0dcb871c0       kube-ingress-dns-minikube                  kube-system
	6556d8ce037db       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   48 seconds ago       Exited              create                                   0                   80815c703d97e       ingress-nginx-admission-create-ssqfv       ingress-nginx
	c30bb6e013b15       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               48 seconds ago       Running             cloud-spanner-emulator                   0                   9ce2b593c7da9       cloud-spanner-emulator-5bdddb765-b2jhf     default
	09004bd8456e2       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              53 seconds ago       Running             yakd                                     0                   2b618017f3e82       yakd-dashboard-5ff678cb9-pp9k4             yakd-dashboard
	303330f3ac4f5       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             56 seconds ago       Running             local-path-provisioner                   0                   aa2beeffc3a81       local-path-provisioner-648f6765c9-wlsqc    local-path-storage
	b07cd0b15477a       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           58 seconds ago       Running             registry                                 0                   3b14a0fa4acb7       registry-6b586f9694-qh5nl                  kube-system
	3fb8bd4648004       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   79a82da8e42af       metrics-server-85b7d694d7-gj9kl            kube-system
	7324c334d61b7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   82b452159f888       coredns-66bc5c9577-kwpl7                   kube-system
	fc93539bfb63a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   aa5b7f5a3196f       storage-provisioner                        kube-system
	9ac221cf3f54d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   0aeb135ff6f31       kube-proxy-tnjbc                           kube-system
	b12a294179793       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   29402f6cde7e7       kindnet-j6wfg                              kube-system
	6965300427d3a       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   75dc39cbbff24       kube-scheduler-addons-101630               kube-system
	a89417715572b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   0368667584c60       kube-apiserver-addons-101630               kube-system
	d16ba02709126       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   a1cf8c7e9fc02       kube-controller-manager-addons-101630      kube-system
	3b636fcb6c702       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   0f8273db6d719       etcd-addons-101630                         kube-system
	
	
	==> coredns [7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192] <==
	[INFO] 10.244.0.16:37600 - 52247 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160407s
	[INFO] 10.244.0.16:37769 - 45205 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000136951s
	[INFO] 10.244.0.16:37769 - 44985 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000179747s
	[INFO] 10.244.0.16:40854 - 62472 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000074907s
	[INFO] 10.244.0.16:40854 - 62174 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000121578s
	[INFO] 10.244.0.16:38404 - 7326 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000069282s
	[INFO] 10.244.0.16:38404 - 7144 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000110038s
	[INFO] 10.244.0.16:40380 - 46381 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000057968s
	[INFO] 10.244.0.16:40380 - 46783 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000106193s
	[INFO] 10.244.0.16:49280 - 9988 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104122s
	[INFO] 10.244.0.16:49280 - 9849 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141896s
	[INFO] 10.244.0.22:39743 - 21268 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175727s
	[INFO] 10.244.0.22:58835 - 33651 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000272397s
	[INFO] 10.244.0.22:38514 - 4988 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166523s
	[INFO] 10.244.0.22:54341 - 39071 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000228006s
	[INFO] 10.244.0.22:41306 - 58336 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121835s
	[INFO] 10.244.0.22:51554 - 6704 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016081s
	[INFO] 10.244.0.22:54124 - 49175 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.007034347s
	[INFO] 10.244.0.22:38539 - 35111 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.008098558s
	[INFO] 10.244.0.22:59178 - 62926 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006869018s
	[INFO] 10.244.0.22:36716 - 28006 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007666058s
	[INFO] 10.244.0.22:34393 - 64571 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005629973s
	[INFO] 10.244.0.22:51183 - 62065 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007680924s
	[INFO] 10.244.0.22:54470 - 53121 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001958057s
	[INFO] 10.244.0.22:42103 - 6854 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001019617s
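	
	None of the NXDOMAIN lines above indicate a problem: with the default pod resolver (ndots:5), every external lookup first walks the resolv.conf search list (svc.cluster.local, cluster.local, then the GCE-internal suffixes) before the bare name finally answers NOERROR, which is exactly the sequence logged for storage.googleapis.com. A sketch of how to watch the same walk from inside the cluster; the pod name and image here are illustrative, not taken from this report:
	
	  kubectl --context addons-101630 run dns-probe --rm -it --restart=Never \
	    --image=busybox:1.36 -- sh -c 'cat /etc/resolv.conf; nslookup storage.googleapis.com'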
	
	
	==> describe nodes <==
	Name:               addons-101630
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-101630
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-101630
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_13_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-101630
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-101630"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101630
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:14:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:14:06 +0000   Sat, 06 Dec 2025 09:13:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:14:06 +0000   Sat, 06 Dec 2025 09:13:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:14:06 +0000   Sat, 06 Dec 2025 09:13:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:14:06 +0000   Sat, 06 Dec 2025 09:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-101630
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                c15a275c-2a6b-449e-a9b4-51c1acabce68
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-b2jhf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  gadget                      gadget-qs5wx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  gcp-auth                    gcp-auth-78565c9fb4-hrdcs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-d2mvt    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         74s
	  kube-system                 amd-gpu-device-plugin-hz4j9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-66bc5c9577-kwpl7                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     74s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 csi-hostpathplugin-d4rl2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 etcd-addons-101630                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         80s
	  kube-system                 kindnet-j6wfg                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-addons-101630                250m (3%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-addons-101630       200m (2%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-tnjbc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-addons-101630                100m (1%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 metrics-server-85b7d694d7-gj9kl             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         74s
	  kube-system                 nvidia-device-plugin-daemonset-lv6tv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 registry-6b586f9694-qh5nl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 registry-creds-764b6fb674-qrdwx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 registry-proxy-cdw5g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 snapshot-controller-7d9fbc56b8-99cb8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 snapshot-controller-7d9fbc56b8-rc4h2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  local-path-storage          local-path-provisioner-648f6765c9-wlsqc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pp9k4              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 73s   kube-proxy       
	  Normal  Starting                 80s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s   kubelet          Node addons-101630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s   kubelet          Node addons-101630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s   kubelet          Node addons-101630 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           76s   node-controller  Node addons-101630 event: Registered Node addons-101630 in Controller
	  Normal  NodeReady                63s   kubelet          Node addons-101630 status is now: NodeReady
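	
	The "Allocated resources" block is just the column sums of the pod table: CPU requests are 100m (ingress-nginx) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m. To pull only that block from a live cluster, one option is:
	
	  kubectl --context addons-101630 describe node addons-101630 \
	    | sed -n '/Allocated resources:/,/Events:/p'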
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 76 5d 69 f0 c7 08 06
	[Dec 6 09:01] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 de db 4c 60 13 08 06
	[  +0.839561] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 9b 83 77 35 c5 08 06
	[  +0.040816] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 c6 d8 79 12 72 08 06
	[  +4.247755] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 c9 0d 30 2d e4 08 06
	[Dec 6 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000012] ll header: 00000000: ff ff ff ff ff ff da c4 b3 f4 5e 17 08 06
	[  +0.237438] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 55 03 f1 e9 ad 08 06
	[  +0.034324] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 85 2c e4 7f a4 08 06
	[  +5.119232] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 eb 23 c0 49 37 08 06
	[ +31.044104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 40 9a f6 09 8f 08 06
	[  +0.864383] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 17 0f 72 35 40 08 06
	[  +0.051841] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
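	
	The "martian source" lines are the kernel flagging pod-subnet (10.244.0.x) source addresses seen on eth0 while containers come up; whether they are logged at all is governed by sysctls. A read-only way to inspect the relevant knobs on the node:
	
	  minikube -p addons-101630 ssh -- \
	    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter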
	
	
	==> etcd [3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f] <==
	{"level":"warn","ts":"2025-12-06T09:13:01.248424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.254701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.262602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.272562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.279874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.286189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.293849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.300603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.308033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.315761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.322572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.328632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.335600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.343158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.349814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.366317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.372953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.379877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:01.430028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:11.382478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:11.389099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:38.831314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:38.842921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:38.857778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:38.866030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34176","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a97bcb7cf000697c1f5740ab1cdc12265652a631bd71b13b9889a5f102ca22c8] <==
	2025/12/06 09:14:06 GCP Auth Webhook started!
	2025/12/06 09:14:12 Ready to marshal response ...
	2025/12/06 09:14:12 Ready to write response ...
	2025/12/06 09:14:13 Ready to marshal response ...
	2025/12/06 09:14:13 Ready to write response ...
	2025/12/06 09:14:13 Ready to marshal response ...
	2025/12/06 09:14:13 Ready to write response ...
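	
	Each "Ready to marshal/write response" pair is the webhook admitting one pod create; the 09:14:13 pair lines up with the default/busybox pod that appears in the node listing above. The registration that routes pod creates through it can be listed with:
	
	  kubectl --context addons-101630 get mutatingwebhookconfigurations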
	
	
	==> kernel <==
	 09:14:24 up  1:56,  0 user,  load average: 1.46, 1.60, 15.86
	Linux addons-101630 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56] <==
	I1206 09:13:10.723517       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:13:10.723541       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:13:10.723555       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:13:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:13:11.106606       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:13:11.207296       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:13:11.207314       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:13:11.323822       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:13:11.623736       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:13:11.623763       1 metrics.go:72] Registering metrics
	I1206 09:13:11.623840       1 controller.go:711] "Syncing nftables rules"
	I1206 09:13:20.928816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:13:20.928897       1 main.go:301] handling current node
	I1206 09:13:30.928584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:13:30.928641       1 main.go:301] handling current node
	I1206 09:13:40.928563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:13:40.928601       1 main.go:301] handling current node
	I1206 09:13:50.927573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:13:50.927613       1 main.go:301] handling current node
	I1206 09:14:00.927634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:14:00.927671       1 main.go:301] handling current node
	I1206 09:14:10.927391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:14:10.927430       1 main.go:301] handling current node
	I1206 09:14:20.930397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 09:14:20.930492       1 main.go:301] handling current node
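	
	The ten-second "Handling node with IPs" cadence is kindnet's reconcile loop on a single-node cluster, and the one-off "nri plugin exited" line only means the runtime exposes no /var/run/nri/nri.sock, which is expected here. The same stream can be tailed without copying the container hash, e.g.:
	
	  kubectl --context addons-101630 -n kube-system logs ds/kindnet --tail=20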
	
	
	==> kube-apiserver [a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b] <==
	W1206 09:13:21.341268       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.149.170:443: connect: connection refused
	E1206 09:13:21.341322       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.149.170:443: connect: connection refused" logger="UnhandledError"
	W1206 09:13:21.341431       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.149.170:443: connect: connection refused
	E1206 09:13:21.341488       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.149.170:443: connect: connection refused" logger="UnhandledError"
	W1206 09:13:21.361510       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.149.170:443: connect: connection refused
	E1206 09:13:21.361550       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.149.170:443: connect: connection refused" logger="UnhandledError"
	W1206 09:13:21.368656       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.149.170:443: connect: connection refused
	E1206 09:13:21.368703       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.149.170:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:24.348480       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	W1206 09:13:24.348522       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 09:13:24.348591       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 09:13:24.348848       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:24.354877       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:24.375644       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:24.416852       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.4.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.4.34:443: connect: connection refused" logger="UnhandledError"
	I1206 09:13:24.523413       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 09:13:38.831307       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1206 09:13:38.842954       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 09:13:38.857718       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 09:13:38.865977       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1206 09:14:22.319173       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35032: use of closed network connection
	E1206 09:14:22.469729       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35062: use of closed network connection
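	
	The v1beta1.metrics.k8s.io connection-refused/503 errors all cluster around 09:13:24, while metrics-server was still starting; the aggregation layer keeps probing until the APIService turns Available, so a later check should show it healthy:
	
	  kubectl --context addons-101630 get apiservice v1beta1.metrics.k8s.io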
	
	
	==> kube-controller-manager [d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add] <==
	I1206 09:13:08.813045       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:13:08.813112       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:13:08.813200       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:13:08.813229       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:13:08.813306       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:13:08.813617       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:13:08.813638       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:13:08.813685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:13:08.813715       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:13:08.813735       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:13:08.815436       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:13:08.816659       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:13:08.817980       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:13:08.818014       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:08.818018       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:13:08.818039       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:13:08.822528       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:13:08.833208       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1206 09:13:10.176266       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1206 09:13:23.765910       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1206 09:13:38.823656       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 09:13:38.823735       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 09:13:38.846011       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 09:13:38.924202       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:38.946431       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
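	
	The lone "Unhandled Error" at 09:13:10 is an ordering race: the metrics-server ReplicaSet was synced before its ServiceAccount existed, and the next sync succeeded (the pod is Running in the node listing above). The account that resolved it:
	
	  kubectl --context addons-101630 -n kube-system get serviceaccount metrics-server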
	
	
	==> kube-proxy [9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d] <==
	I1206 09:13:10.525825       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:13:10.612283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:13:10.713419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:13:10.713513       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:13:10.713617       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:13:10.741353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:13:10.741427       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:13:10.747176       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:13:10.747672       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:13:10.747722       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:10.749064       1 config.go:200] "Starting service config controller"
	I1206 09:13:10.749160       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:13:10.749090       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:13:10.749276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:13:10.749162       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:13:10.749378       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:13:10.749232       1 config.go:309] "Starting node config controller"
	I1206 09:13:10.749499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:13:10.749509       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:13:10.849478       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:13:10.849511       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:13:10.849449       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
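	
	The only warning here, "nodePortAddresses is unset", means NodePort services accept connections on every local IP; the message itself names the fix. In a kubeadm-style cluster that field lives in the kube-proxy ConfigMap, and an empty grep result below would match the "unset" warning:
	
	  kubectl --context addons-101630 -n kube-system get configmap kube-proxy -o yaml \
	    | grep -n nodePortAddresses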
	
	
	==> kube-scheduler [6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480] <==
	E1206 09:13:01.831340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:13:01.831503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:13:01.831507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:13:01.831527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:13:01.831682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:13:01.831770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:13:01.831866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:13:01.831871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:13:01.831870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:13:01.831946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:13:01.831989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:13:01.832028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:13:01.832028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:13:02.662642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:13:02.673647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:13:02.706865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:13:02.794420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:13:02.798492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:13:02.802373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:13:02.879227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:13:02.931891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:13:02.986153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:13:03.012309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:13:03.091092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:13:05.225890       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
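	
	The burst of "is forbidden" watch errors between 09:13:01 and 09:13:03 is the usual bootstrap race: the scheduler's informers start before the system:kube-scheduler RBAC is wired up, then retry; the closing "Caches are synced" line shows the recovery. The binding that eventually satisfies those watches:
	
	  kubectl --context addons-101630 get clusterrolebinding system:kube-scheduler -o wide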
	
	
	==> kubelet <==
	Dec 06 09:13:50 addons-101630 kubelet[1295]: I1206 09:13:50.465584    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hz4j9" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:13:50 addons-101630 kubelet[1295]: I1206 09:13:50.476993    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-hz4j9" podStartSLOduration=1.687386965 podStartE2EDuration="29.476973106s" podCreationTimestamp="2025-12-06 09:13:21 +0000 UTC" firstStartedPulling="2025-12-06 09:13:21.805241159 +0000 UTC m=+17.652123911" lastFinishedPulling="2025-12-06 09:13:49.594827287 +0000 UTC m=+45.441710052" observedRunningTime="2025-12-06 09:13:50.475696916 +0000 UTC m=+46.322579713" watchObservedRunningTime="2025-12-06 09:13:50.476973106 +0000 UTC m=+46.323855879"
	Dec 06 09:13:51 addons-101630 kubelet[1295]: I1206 09:13:51.469681    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hz4j9" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:13:53 addons-101630 kubelet[1295]: E1206 09:13:53.226442    1295 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 06 09:13:53 addons-101630 kubelet[1295]: E1206 09:13:53.226547    1295 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bf9406e-6469-4c0a-b3d1-35797ae72deb-gcr-creds podName:3bf9406e-6469-4c0a-b3d1-35797ae72deb nodeName:}" failed. No retries permitted until 2025-12-06 09:14:25.226530009 +0000 UTC m=+81.073412762 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/3bf9406e-6469-4c0a-b3d1-35797ae72deb-gcr-creds") pod "registry-creds-764b6fb674-qrdwx" (UID: "3bf9406e-6469-4c0a-b3d1-35797ae72deb") : secret "registry-creds-gcr" not found
	Dec 06 09:13:56 addons-101630 kubelet[1295]: I1206 09:13:56.492430    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cdw5g" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:13:56 addons-101630 kubelet[1295]: I1206 09:13:56.501717    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=16.564446107 podStartE2EDuration="46.501693914s" podCreationTimestamp="2025-12-06 09:13:10 +0000 UTC" firstStartedPulling="2025-12-06 09:13:21.806898143 +0000 UTC m=+17.653780923" lastFinishedPulling="2025-12-06 09:13:51.744145964 +0000 UTC m=+47.591028730" observedRunningTime="2025-12-06 09:13:52.498207884 +0000 UTC m=+48.345090658" watchObservedRunningTime="2025-12-06 09:13:56.501693914 +0000 UTC m=+52.348576688"
	Dec 06 09:13:57 addons-101630 kubelet[1295]: I1206 09:13:57.236635    1295 scope.go:117] "RemoveContainer" containerID="4509e91df530b9a548b55e358c821829e16939d25461bc942a8b9e04cc6f4fa4"
	Dec 06 09:13:57 addons-101630 kubelet[1295]: I1206 09:13:57.494933    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cdw5g" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:13:58 addons-101630 kubelet[1295]: I1206 09:13:58.499838    1295 scope.go:117] "RemoveContainer" containerID="4509e91df530b9a548b55e358c821829e16939d25461bc942a8b9e04cc6f4fa4"
	Dec 06 09:13:58 addons-101630 kubelet[1295]: I1206 09:13:58.510448    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-cdw5g" podStartSLOduration=3.474539832 podStartE2EDuration="37.510427177s" podCreationTimestamp="2025-12-06 09:13:21 +0000 UTC" firstStartedPulling="2025-12-06 09:13:21.883950719 +0000 UTC m=+17.730833484" lastFinishedPulling="2025-12-06 09:13:55.919838072 +0000 UTC m=+51.766720829" observedRunningTime="2025-12-06 09:13:56.501165494 +0000 UTC m=+52.348048268" watchObservedRunningTime="2025-12-06 09:13:58.510427177 +0000 UTC m=+54.357309951"
	Dec 06 09:13:59 addons-101630 kubelet[1295]: I1206 09:13:59.541851    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-qs5wx" podStartSLOduration=17.150021994 podStartE2EDuration="49.541827737s" podCreationTimestamp="2025-12-06 09:13:10 +0000 UTC" firstStartedPulling="2025-12-06 09:13:26.459167163 +0000 UTC m=+22.306049918" lastFinishedPulling="2025-12-06 09:13:58.850972907 +0000 UTC m=+54.697855661" observedRunningTime="2025-12-06 09:13:59.541088783 +0000 UTC m=+55.387971558" watchObservedRunningTime="2025-12-06 09:13:59.541827737 +0000 UTC m=+55.388710511"
	Dec 06 09:13:59 addons-101630 kubelet[1295]: I1206 09:13:59.681945    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjgw7\" (UniqueName: \"kubernetes.io/projected/0ae9cb91-d466-4daf-9fc5-24a89fc4558d-kube-api-access-rjgw7\") pod \"0ae9cb91-d466-4daf-9fc5-24a89fc4558d\" (UID: \"0ae9cb91-d466-4daf-9fc5-24a89fc4558d\") "
	Dec 06 09:13:59 addons-101630 kubelet[1295]: I1206 09:13:59.685108    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae9cb91-d466-4daf-9fc5-24a89fc4558d-kube-api-access-rjgw7" (OuterVolumeSpecName: "kube-api-access-rjgw7") pod "0ae9cb91-d466-4daf-9fc5-24a89fc4558d" (UID: "0ae9cb91-d466-4daf-9fc5-24a89fc4558d"). InnerVolumeSpecName "kube-api-access-rjgw7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 09:13:59 addons-101630 kubelet[1295]: I1206 09:13:59.782670    1295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rjgw7\" (UniqueName: \"kubernetes.io/projected/0ae9cb91-d466-4daf-9fc5-24a89fc4558d-kube-api-access-rjgw7\") on node \"addons-101630\" DevicePath \"\""
	Dec 06 09:14:00 addons-101630 kubelet[1295]: I1206 09:14:00.513350    1295 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a232ec35741984d67007f006e4c04cfe9dc997dfa0f29a9be150cce0f3374d"
	Dec 06 09:14:02 addons-101630 kubelet[1295]: I1206 09:14:02.531915    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-d2mvt" podStartSLOduration=27.49553042 podStartE2EDuration="52.531893303s" podCreationTimestamp="2025-12-06 09:13:10 +0000 UTC" firstStartedPulling="2025-12-06 09:13:37.32636772 +0000 UTC m=+33.173250477" lastFinishedPulling="2025-12-06 09:14:02.36273059 +0000 UTC m=+58.209613360" observedRunningTime="2025-12-06 09:14:02.531580939 +0000 UTC m=+58.378463723" watchObservedRunningTime="2025-12-06 09:14:02.531893303 +0000 UTC m=+58.378776077"
	Dec 06 09:14:06 addons-101630 kubelet[1295]: I1206 09:14:06.549314    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-hrdcs" podStartSLOduration=36.607517261 podStartE2EDuration="49.54928936s" podCreationTimestamp="2025-12-06 09:13:17 +0000 UTC" firstStartedPulling="2025-12-06 09:13:53.516921778 +0000 UTC m=+49.363804536" lastFinishedPulling="2025-12-06 09:14:06.458693878 +0000 UTC m=+62.305576635" observedRunningTime="2025-12-06 09:14:06.548865373 +0000 UTC m=+62.395748147" watchObservedRunningTime="2025-12-06 09:14:06.54928936 +0000 UTC m=+62.396172134"
	Dec 06 09:14:08 addons-101630 kubelet[1295]: I1206 09:14:08.280908    1295 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 06 09:14:08 addons-101630 kubelet[1295]: I1206 09:14:08.280968    1295 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 06 09:14:10 addons-101630 kubelet[1295]: I1206 09:14:10.582600    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-d4rl2" podStartSLOduration=1.6622915919999999 podStartE2EDuration="49.582578129s" podCreationTimestamp="2025-12-06 09:13:21 +0000 UTC" firstStartedPulling="2025-12-06 09:13:21.805948674 +0000 UTC m=+17.652831427" lastFinishedPulling="2025-12-06 09:14:09.726235203 +0000 UTC m=+65.573117964" observedRunningTime="2025-12-06 09:14:10.581014713 +0000 UTC m=+66.427897522" watchObservedRunningTime="2025-12-06 09:14:10.582578129 +0000 UTC m=+66.429460903"
	Dec 06 09:14:13 addons-101630 kubelet[1295]: I1206 09:14:13.179521    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/acbd545b-4ccf-4516-a223-f5a9a8013869-gcp-creds\") pod \"busybox\" (UID: \"acbd545b-4ccf-4516-a223-f5a9a8013869\") " pod="default/busybox"
	Dec 06 09:14:13 addons-101630 kubelet[1295]: I1206 09:14:13.179600    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brm2s\" (UniqueName: \"kubernetes.io/projected/acbd545b-4ccf-4516-a223-f5a9a8013869-kube-api-access-brm2s\") pod \"busybox\" (UID: \"acbd545b-4ccf-4516-a223-f5a9a8013869\") " pod="default/busybox"
	Dec 06 09:14:16 addons-101630 kubelet[1295]: I1206 09:14:16.239142    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1735e353-33c5-4a74-8693-30ce6b393308" path="/var/lib/kubelet/pods/1735e353-33c5-4a74-8693-30ce6b393308/volumes"
	Dec 06 09:14:16 addons-101630 kubelet[1295]: I1206 09:14:16.604353    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.214680873 podStartE2EDuration="3.604330057s" podCreationTimestamp="2025-12-06 09:14:13 +0000 UTC" firstStartedPulling="2025-12-06 09:14:13.496818509 +0000 UTC m=+69.343701266" lastFinishedPulling="2025-12-06 09:14:15.886467696 +0000 UTC m=+71.733350450" observedRunningTime="2025-12-06 09:14:16.603203979 +0000 UTC m=+72.450086755" watchObservedRunningTime="2025-12-06 09:14:16.604330057 +0000 UTC m=+72.451212831"
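	
	Two patterns dominate this stream: benign "secret \"gcp-auth\" not found" pull-secret lookups, and pod_startup_latency_tracker lines whose podStartE2EDuration is simply observedRunningTime minus podCreationTimestamp (busybox: created 09:14:13, observed running 09:14:16.6, hence ~3.6s). The same timeline can be cross-checked from events:
	
	  kubectl --context addons-101630 -n default get events \
	    --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp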
	
	
	==> storage-provisioner [fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157] <==
	W1206 09:13:59.996119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:01.999693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:02.004475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:04.007734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:04.012728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:06.016028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:06.055269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:08.058347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:08.061937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:10.065804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:10.069703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:12.073039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:12.079498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:14.082697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:14.086443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:16.089884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:16.094480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:18.097836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:18.101886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:20.105022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:20.110680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:22.114199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:22.118118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:24.121020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:24.124853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
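	
	The two-second warning cadence is client-go flagging each read of the deprecated core/v1 Endpoints API, likely from the provisioner's leader-election lock; functionality is unaffected, and the replacement resource the warning points at can be listed with:
	
	  kubectl --context addons-101630 -n kube-system get endpointslices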
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-101630 -n addons-101630
helpers_test.go:269: (dbg) Run:  kubectl --context addons-101630 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-qbf48 ingress-nginx-admission-create-ssqfv ingress-nginx-admission-patch-6zxgf registry-creds-764b6fb674-qrdwx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-101630 describe pod gcp-auth-certs-patch-qbf48 ingress-nginx-admission-create-ssqfv ingress-nginx-admission-patch-6zxgf registry-creds-764b6fb674-qrdwx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-101630 describe pod gcp-auth-certs-patch-qbf48 ingress-nginx-admission-create-ssqfv ingress-nginx-admission-patch-6zxgf registry-creds-764b6fb674-qrdwx: exit status 1 (60.92771ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-qbf48" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-ssqfv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6zxgf" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-qrdwx" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-101630 describe pod gcp-auth-certs-patch-qbf48 ingress-nginx-admission-create-ssqfv ingress-nginx-admission-patch-6zxgf registry-creds-764b6fb674-qrdwx: exit status 1
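Note: the NotFound errors above are churn rather than a failure cause: all four pods were deleted in the window between the field-selector listing and the describe call. The admission-create/patch and certs-patch entries are one-shot Job pods that had already completed.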
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable headlamp --alsologtostderr -v=1: exit status 11 (241.173809ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:25.008008  513620 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:25.008182  513620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:25.008193  513620 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:25.008197  513620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:25.008397  513620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:25.008669  513620 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:25.008988  513620 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:25.009008  513620 addons.go:622] checking whether the cluster is paused
	I1206 09:14:25.009086  513620 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:25.009103  513620 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:25.009494  513620 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:25.028628  513620 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:25.028683  513620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:25.045786  513620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:25.138502  513620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:25.138609  513620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:25.168319  513620 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:25.168343  513620 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:25.168347  513620 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:25.168351  513620 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:25.168354  513620 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:25.168358  513620 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:25.168361  513620 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:25.168364  513620 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:25.168366  513620 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:25.168371  513620 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:25.168374  513620 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:25.168377  513620 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:25.168380  513620 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:25.168383  513620 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:25.168386  513620 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:25.168390  513620 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:25.168393  513620 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:25.168399  513620 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:25.168401  513620 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:25.168404  513620 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:25.168409  513620 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:25.168412  513620 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:25.168415  513620 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:25.168420  513620 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:25.168433  513620 cri.go:89] found id: ""
	I1206 09:14:25.168494  513620 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:25.183089  513620 out.go:203] 
	W1206 09:14:25.184208  513620 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:25.184231  513620 out.go:285] * 
	* 
	W1206 09:14:25.187222  513620 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:25.188430  513620 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.48s)
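Note: every "addons disable" failure in this report shares one root cause. Before disabling an addon, minikube verifies the cluster is not paused; that check ultimately shells out to `sudo runc list -f json`, which fails on this crio node because runc's state directory /run/runc does not exist, so the command exits 11 with MK_ADDON_DISABLE_PAUSED. A minimal hand-repro sketch against this profile (the `minikube ssh` wrapping is illustrative; the inner commands are taken verbatim from the log above):
	# the paused-state check as minikube runs it -- fails on this crio node:
	out/minikube-linux-amd64 -p addons-101630 ssh -- sudo runc list -f json
	# the crictl listing from the same code path succeeds, as the "found id" lines above show:
	out/minikube-linux-amd64 -p addons-101630 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system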

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-b2jhf" [94a135c8-bc31-4f19-9695-fd9cd4e85be5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002764954s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (250.919993ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:39.337025  514952 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:39.337155  514952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:39.337164  514952 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:39.337168  514952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:39.337393  514952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:39.337699  514952 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:39.337988  514952 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:39.338006  514952 addons.go:622] checking whether the cluster is paused
	I1206 09:14:39.338079  514952 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:39.338096  514952 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:39.338483  514952 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:39.356950  514952 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:39.357015  514952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:39.376366  514952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:39.471200  514952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:39.471264  514952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:39.504130  514952 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:39.504149  514952 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:39.504154  514952 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:39.504157  514952 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:39.504160  514952 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:39.504163  514952 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:39.504166  514952 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:39.504175  514952 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:39.504178  514952 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:39.504183  514952 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:39.504190  514952 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:39.504193  514952 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:39.504196  514952 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:39.504199  514952 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:39.504202  514952 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:39.504211  514952 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:39.504216  514952 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:39.504220  514952 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:39.504223  514952 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:39.504226  514952 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:39.504232  514952 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:39.504236  514952 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:39.504239  514952 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:39.504242  514952 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:39.504245  514952 cri.go:89] found id: ""
	I1206 09:14:39.504281  514952 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:39.518752  514952 out.go:203] 
	W1206 09:14:39.519976  514952 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:39.519994  514952 out.go:285] * 
	* 
	W1206 09:14:39.523191  514952 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:39.524311  514952 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
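Note: same MK_ADDON_DISABLE_PAUSED failure as TestAddons/parallel/Headlamp above; the paused-state check's `sudo runc list -f json` cannot run on this crio node (see the repro sketch under that test).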

TestAddons/parallel/LocalPath (12.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-101630 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-101630 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101630 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [74388b20-d850-42f0-8051-2188f036447b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [74388b20-d850-42f0-8051-2188f036447b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [74388b20-d850-42f0-8051-2188f036447b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003697587s
addons_test.go:967: (dbg) Run:  kubectl --context addons-101630 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 ssh "cat /opt/local-path-provisioner/pvc-20d4bb10-c0ec-46a8-962a-05dd97216bc2_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-101630 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-101630 delete pvc test-pvc
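For reference, the host path read by the `ssh "cat ..."` step above follows local-path-provisioner's naming scheme <pv-name>_<namespace>_<pvc-name> under /opt/local-path-provisioner, which matches the pvc-20d4bb10-..._default_test-pvc directory in the log. A hand-check sketch, valid while the PVC still exists (commands illustrative, layout taken from the log):
	kubectl --context addons-101630 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
	out/minikube-linux-amd64 -p addons-101630 ssh "ls /opt/local-path-provisioner/"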
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (260.04993ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:39.877952  515155 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:39.878287  515155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:39.878303  515155 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:39.878309  515155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:39.878648  515155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:39.878943  515155 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:39.879245  515155 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:39.879265  515155 addons.go:622] checking whether the cluster is paused
	I1206 09:14:39.879339  515155 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:39.879355  515155 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:39.879772  515155 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:39.898953  515155 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:39.899026  515155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:39.922061  515155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:40.018311  515155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:40.018410  515155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:40.049489  515155 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:40.049523  515155 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:40.049530  515155 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:40.049535  515155 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:40.049540  515155 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:40.049546  515155 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:40.049550  515155 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:40.049555  515155 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:40.049560  515155 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:40.049567  515155 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:40.049576  515155 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:40.049580  515155 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:40.049585  515155 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:40.049588  515155 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:40.049590  515155 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:40.049601  515155 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:40.049607  515155 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:40.049612  515155 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:40.049615  515155 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:40.049618  515155 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:40.049621  515155 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:40.049624  515155 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:40.049627  515155 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:40.049630  515155 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:40.049633  515155 cri.go:89] found id: ""
	I1206 09:14:40.049677  515155 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:40.069159  515155 out.go:203] 
	W1206 09:14:40.070418  515155 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:40.070446  515155 out.go:285] * 
	* 
	W1206 09:14:40.074429  515155 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:40.076256  515155 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (12.12s)
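Note: the LocalPath workload itself passed (PVC bound, pod succeeded, file contents verified); only the trailing `addons disable storage-provisioner-rancher` hit the same runc paused-state failure described under TestAddons/parallel/Headlamp.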

TestAddons/parallel/NvidiaDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-lv6tv" [b89ce175-14f9-4a10-9fdb-43d64edf8373] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003613168s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (237.38773ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:27.776497  513708 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:27.776770  513708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:27.776779  513708 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:27.776784  513708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:27.776947  513708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:27.777200  513708 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:27.777551  513708 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:27.777575  513708 addons.go:622] checking whether the cluster is paused
	I1206 09:14:27.777661  513708 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:27.777678  513708 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:27.778030  513708 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:27.795748  513708 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:27.795797  513708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:27.812733  513708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:27.904869  513708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:27.904952  513708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:27.934239  513708 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:27.934259  513708 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:27.934264  513708 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:27.934268  513708 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:27.934270  513708 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:27.934275  513708 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:27.934278  513708 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:27.934281  513708 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:27.934284  513708 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:27.934310  513708 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:27.934315  513708 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:27.934319  513708 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:27.934324  513708 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:27.934332  513708 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:27.934343  513708 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:27.934352  513708 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:27.934355  513708 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:27.934359  513708 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:27.934362  513708 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:27.934365  513708 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:27.934368  513708 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:27.934370  513708 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:27.934373  513708 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:27.934376  513708 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:27.934379  513708 cri.go:89] found id: ""
	I1206 09:14:27.934427  513708 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:27.948222  513708 out.go:203] 
	W1206 09:14:27.949101  513708 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:27.949116  513708 out.go:285] * 
	* 
	W1206 09:14:27.952125  513708 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:27.953253  513708 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)
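Note: the nvidia-device-plugin daemonset was healthy; the failure is again the runc paused-state check during `addons disable` (see the sketch under TestAddons/parallel/Headlamp).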

TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-pp9k4" [f02eb913-bb6d-4fc5-91b5-ff7977b8ed43] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004310811s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable yakd --alsologtostderr -v=1: exit status 11 (253.050691ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:30.263921  513951 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:30.264221  513951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:30.264236  513951 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:30.264241  513951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:30.264499  513951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:30.264770  513951 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:30.265129  513951 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:30.265150  513951 addons.go:622] checking whether the cluster is paused
	I1206 09:14:30.265236  513951 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:30.265253  513951 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:30.265627  513951 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:30.283835  513951 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:30.283896  513951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:30.302052  513951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:30.395598  513951 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:30.395669  513951 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:30.426406  513951 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:30.426427  513951 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:30.426431  513951 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:30.426434  513951 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:30.426437  513951 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:30.426440  513951 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:30.426443  513951 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:30.426446  513951 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:30.426449  513951 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:30.426471  513951 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:30.426476  513951 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:30.426481  513951 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:30.426485  513951 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:30.426489  513951 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:30.426509  513951 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:30.426520  513951 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:30.426524  513951 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:30.426528  513951 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:30.426531  513951 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:30.426533  513951 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:30.426536  513951 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:30.426539  513951 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:30.426541  513951 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:30.426544  513951 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:30.426549  513951 cri.go:89] found id: ""
	I1206 09:14:30.426587  513951 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:30.441742  513951 out.go:203] 
	W1206 09:14:30.443270  513951 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:30.443288  513951 out.go:285] * 
	* 
	W1206 09:14:30.446290  513951 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:30.447431  513951 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)
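Note: same pattern as above: yakd-dashboard came up healthy and only the disable step failed with MK_ADDON_DISABLE_PAUSED.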

TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-hz4j9" [3ac2ab95-fb88-4d29-ae32-74adec71db58] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004004043s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-101630 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-101630 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (239.815735ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 09:14:28.777684  513838 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:28.777822  513838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:28.777833  513838 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:28.777837  513838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:28.778066  513838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:14:28.778416  513838 mustload.go:66] Loading cluster: addons-101630
	I1206 09:14:28.778780  513838 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:28.778804  513838 addons.go:622] checking whether the cluster is paused
	I1206 09:14:28.778884  513838 config.go:182] Loaded profile config "addons-101630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:14:28.778902  513838 host.go:66] Checking if "addons-101630" exists ...
	I1206 09:14:28.779279  513838 cli_runner.go:164] Run: docker container inspect addons-101630 --format={{.State.Status}}
	I1206 09:14:28.797428  513838 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:28.797520  513838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101630
	I1206 09:14:28.814628  513838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/addons-101630/id_rsa Username:docker}
	I1206 09:14:28.908479  513838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:14:28.908567  513838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:14:28.937865  513838 cri.go:89] found id: "48412b93386c339a85f28cec1bb50f941ffc900ed378cf48b1db9b4b4627e469"
	I1206 09:14:28.937892  513838 cri.go:89] found id: "b43a181098b64a8c02ee66a0e3d8e9c116b15b3b42b8cdc1fc479cc146feb329"
	I1206 09:14:28.937907  513838 cri.go:89] found id: "0efcf1711c0c1913174d2e831066765c94387626c1bb5d73a1fa84f343cc5d7d"
	I1206 09:14:28.937912  513838 cri.go:89] found id: "f53e5b7b950e3d700477df69325ce7aaef1a31032ce64214acf80357d228351d"
	I1206 09:14:28.937917  513838 cri.go:89] found id: "953fb247031e30eb7b2a85c6cedda9cbd0ac502cae68d3679258d93f2e766b40"
	I1206 09:14:28.937923  513838 cri.go:89] found id: "79b2f00dfcfb14d65435e1e091f8536d9ce60f2f4584fae8558f411bf0eb0d00"
	I1206 09:14:28.937927  513838 cri.go:89] found id: "fb02c57fd629bcc9dd60528df6cb90af6266191512e8f2815fd1b7b8dd84a867"
	I1206 09:14:28.937932  513838 cri.go:89] found id: "696827076a7717a7a7d48cc66a0259ad7f022d51feda748cfca676a0cb2fc8c2"
	I1206 09:14:28.937936  513838 cri.go:89] found id: "7a4130788df8e967918b22ca4ff37fd155d8cac714274073e92ecf98ac135514"
	I1206 09:14:28.937945  513838 cri.go:89] found id: "8b6f64e34b32c72df9178e4d63ad43e085ff3ba0ad44adf46c53fc394bce184b"
	I1206 09:14:28.937952  513838 cri.go:89] found id: "41fc749cc8817b38648b7ceac17c1ef3528623064afdf2beeaf91c88c343f63d"
	I1206 09:14:28.937954  513838 cri.go:89] found id: "e27ecbcda3b56de801a2337b718e80b641b2350f1bca00404848e6131b1d10c7"
	I1206 09:14:28.937957  513838 cri.go:89] found id: "fc9564c451d5df251396a2349c6683b4a0185b6b46b0a22d02638ad5efa5756e"
	I1206 09:14:28.937960  513838 cri.go:89] found id: "7098dc77bd42b437daee0f48fbe2255f474de492a8cd2bea6b738aac7fa5daee"
	I1206 09:14:28.937964  513838 cri.go:89] found id: "b07cd0b15477aa2598ffbe838f807539eb7fe9ea03cd973ae318fec954f993d3"
	I1206 09:14:28.937971  513838 cri.go:89] found id: "3fb8bd4648004030d1568cb96b38a40b0dc84dd1997fe1d09eebfc5e9fc00d59"
	I1206 09:14:28.937976  513838 cri.go:89] found id: "7324c334d61b7a2c5d5f7897767dbbfe0ee7dc57bc4e912e99b1684d79247192"
	I1206 09:14:28.937981  513838 cri.go:89] found id: "fc93539bfb63a5f6096f6a3b18b5ea752fe278051ddf340e3c3aaa64f01ae157"
	I1206 09:14:28.937984  513838 cri.go:89] found id: "9ac221cf3f54db42c900d0deb50a82327332f30022d10b5db554c6ba8314dc4d"
	I1206 09:14:28.937987  513838 cri.go:89] found id: "b12a294179793c603ea0aa41a36b72084253a802eca5054d434fcc744c5deb56"
	I1206 09:14:28.937992  513838 cri.go:89] found id: "6965300427d3a92b105fc6716cf425a4fdfdbf7634182d43cd46dea2abdf3480"
	I1206 09:14:28.937994  513838 cri.go:89] found id: "a89417715572bb5b5a530d44de2f7c9e20320bb4e9b0695798dec5e95b25d91b"
	I1206 09:14:28.937997  513838 cri.go:89] found id: "d16ba027091267b1239e9aa18e936d2d1682508bb88e5d330368070c481e3add"
	I1206 09:14:28.937999  513838 cri.go:89] found id: "3b636fcb6c7022aefe591b2bb3af1ca0970f71e1b1c6d76aa28987d5705c3e2f"
	I1206 09:14:28.938002  513838 cri.go:89] found id: ""
	I1206 09:14:28.938041  513838 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:14:28.952102  513838 out.go:203] 
	W1206 09:14:28.953041  513838 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:14:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:14:28.953056  513838 out.go:285] * 
	* 
	W1206 09:14:28.956017  513838 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:14:28.956948  513838 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-101630 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.25s)
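This failure is not specific to the amd-gpu-device-plugin addon. Before disabling anything, `addons disable` checks whether the cluster is paused by running `sudo runc list -f json` on the node; on this crio node `/run/runc` does not exist, so the probe exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED. A minimal Go sketch of that probe follows; the helper name is hypothetical (minikube's real check runs the command over ssh_runner, per the cri.go lines above).

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

// checkPausedViaRunc mirrors the probe in the log: run `sudo runc list -f json`
// and treat any non-zero exit as a hard error. That is exactly what surfaces
// here when the runc state directory /run/runc is missing.
func checkPausedViaRunc() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// e.g. level=error msg="open /run/runc: no such file or directory"
		return nil, fmt.Errorf("list paused: runc: %w", err)
	}
	return out, nil
}

func main() {
	if _, err := checkPausedViaRunc(); err != nil {
		fmt.Println("X Exiting due to MK_ADDON_DISABLE_PAUSED:", err)
	}
}
-- /go sketch --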

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 config get cpus: exit status 14 (111.635363ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 config get cpus
functional_test.go:1225: expected config error for "out/minikube-linux-amd64 -p functional-326325 config get cpus" to be -""- but got *"E1206 09:23:42.787068  554582 logFile.go:53] failed to close the audit log: invalid argument"*
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 config get cpus: exit status 14 (82.026718ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)
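The config round trip itself behaves as designed: `config get cpus` after `config unset cpus` exits 14 with "specified key could not be found in config". What trips the test is the assertion at functional_test.go:1225 that the `get` following `config set cpus 2` writes nothing to stderr; the stray audit-log close error pollutes stderr. A rough sketch of the same round trip under those assumptions (binary path and profile name taken from the log, helper name hypothetical):

-- go sketch --
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runConfig shells out to the built minikube binary the way the test does
// and captures stderr separately, since stderr content is what the
// assertion above compares against the empty string.
func runConfig(args ...string) (stderr string, err error) {
	argv := append([]string{"-p", "functional-326325", "config"}, args...)
	cmd := exec.Command("out/minikube-linux-amd64", argv...)
	var errBuf bytes.Buffer
	cmd.Stderr = &errBuf
	err = cmd.Run()
	return errBuf.String(), err
}

func main() {
	runConfig("set", "cpus", "2")
	stderr, _ := runConfig("get", "cpus")
	if stderr != "" {
		// This is the failing condition: any stderr output at all, such as
		// "failed to close the audit log: invalid argument".
		fmt.Printf("expected empty stderr, got %q\n", stderr)
	}
	runConfig("unset", "cpus")
}
-- /go sketch --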

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.39s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-933492 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-933492 --output=json --user=testUser: exit status 80 (2.388280989s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"db643da3-74e2-4b0b-9321-e898ece555d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-933492 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9289aeef-1cd9-47f4-b086-d2c850058a0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-06T09:33:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"e205e354-d1be-44b1-8aa2-6b7226edd05c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-933492 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.39s)
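With `--output=json`, each stdout line is a CloudEvents envelope, and the failure is carried by an `io.k8s.sigs.minikube.error` event (name GUEST_PAUSE, exitcode 80) wrapping the same runc message as the plain-text runs. A minimal decoder for lines of this shape, assuming only the fields visible in the events above:

-- go sketch --
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent models the per-line envelope minikube emits with --output=json;
// only fields visible in the events above are included, and the "data"
// values shown are all strings.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}
-- /go sketch --

Piping the `pause -p json-output-933492 --output=json` output above through this would print the GUEST_PAUSE event.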

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.17s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-933492 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-933492 --output=json --user=testUser: exit status 80 (2.174465186s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"361b56c6-5f1c-47a7-92bc-698c50c46c04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-933492 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"1eb5b45e-7512-4d61-9388-97d78f650f82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-06T09:33:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"8cc6e74f-de54-4219-841f-7a269d3de89d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-933492 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.17s)

                                                
                                    
x
+
TestPause/serial/Pause (5.55s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-137950 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-137950 --alsologtostderr -v=5: exit status 80 (1.850584859s)

                                                
                                                
-- stdout --
	* Pausing node pause-137950 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:47:06.127122  711506 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:47:06.127231  711506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:47:06.127239  711506 out.go:374] Setting ErrFile to fd 2...
	I1206 09:47:06.127243  711506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:47:06.127442  711506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:47:06.127706  711506 out.go:368] Setting JSON to false
	I1206 09:47:06.127726  711506 mustload.go:66] Loading cluster: pause-137950
	I1206 09:47:06.128134  711506 config.go:182] Loaded profile config "pause-137950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:47:06.128694  711506 cli_runner.go:164] Run: docker container inspect pause-137950 --format={{.State.Status}}
	I1206 09:47:06.146151  711506 host.go:66] Checking if "pause-137950" exists ...
	I1206 09:47:06.146408  711506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:47:06.204548  711506 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-06 09:47:06.194809374 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:47:06.205198  711506 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-137950 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:47:06.269041  711506 out.go:179] * Pausing node pause-137950 ... 
	I1206 09:47:06.288763  711506 host.go:66] Checking if "pause-137950" exists ...
	I1206 09:47:06.289102  711506 ssh_runner.go:195] Run: systemctl --version
	I1206 09:47:06.289155  711506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137950
	I1206 09:47:06.308106  711506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/pause-137950/id_rsa Username:docker}
	I1206 09:47:06.403115  711506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:47:06.418077  711506 pause.go:52] kubelet running: true
	I1206 09:47:06.418148  711506 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:47:06.552317  711506 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:47:06.552405  711506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:47:06.621860  711506 cri.go:89] found id: "d510c6b2fe97869dbc63deaec59ea1bcc7e242180e112d0b933cc14d427eb0d7"
	I1206 09:47:06.621882  711506 cri.go:89] found id: "9084044e377dbea24c01a6f7b83522ce0cca64c749166b0ed4b1dd1fb9b67766"
	I1206 09:47:06.621886  711506 cri.go:89] found id: "5636af128e1d708b302dceab1c741afccd110a21e7b5be3cf8a0fabb030253ab"
	I1206 09:47:06.621889  711506 cri.go:89] found id: "8ca3492af015eb56f5ee9789e7ab502a9333afb2a9f9fdb6d27eee53e3d3671d"
	I1206 09:47:06.621892  711506 cri.go:89] found id: "1198f6d7ee136c379639a8be91363f8629cd112ef08874ca70d1e6b367cf027e"
	I1206 09:47:06.621896  711506 cri.go:89] found id: "d58bb44be0561901a4d07f8c1846c400377d8d00060e7d090693ad0092278bb3"
	I1206 09:47:06.621899  711506 cri.go:89] found id: "14a6067fda89cc7a9c2cfa81c10a8cf8fd8f7cc6f9b0ba4ecd8fe602f13f3c55"
	I1206 09:47:06.621901  711506 cri.go:89] found id: ""
	I1206 09:47:06.621944  711506 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:47:06.633905  711506 retry.go:31] will retry after 373.265484ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:47:06Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:47:07.007448  711506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:47:07.022748  711506 pause.go:52] kubelet running: false
	I1206 09:47:07.022813  711506 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:47:07.134848  711506 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:47:07.134937  711506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:47:07.209952  711506 cri.go:89] found id: "d510c6b2fe97869dbc63deaec59ea1bcc7e242180e112d0b933cc14d427eb0d7"
	I1206 09:47:07.209981  711506 cri.go:89] found id: "9084044e377dbea24c01a6f7b83522ce0cca64c749166b0ed4b1dd1fb9b67766"
	I1206 09:47:07.209987  711506 cri.go:89] found id: "5636af128e1d708b302dceab1c741afccd110a21e7b5be3cf8a0fabb030253ab"
	I1206 09:47:07.209992  711506 cri.go:89] found id: "8ca3492af015eb56f5ee9789e7ab502a9333afb2a9f9fdb6d27eee53e3d3671d"
	I1206 09:47:07.210012  711506 cri.go:89] found id: "1198f6d7ee136c379639a8be91363f8629cd112ef08874ca70d1e6b367cf027e"
	I1206 09:47:07.210018  711506 cri.go:89] found id: "d58bb44be0561901a4d07f8c1846c400377d8d00060e7d090693ad0092278bb3"
	I1206 09:47:07.210022  711506 cri.go:89] found id: "14a6067fda89cc7a9c2cfa81c10a8cf8fd8f7cc6f9b0ba4ecd8fe602f13f3c55"
	I1206 09:47:07.210027  711506 cri.go:89] found id: ""
	I1206 09:47:07.210079  711506 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:47:07.222838  711506 retry.go:31] will retry after 407.770848ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:47:07Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:47:07.631259  711506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:47:07.646279  711506 pause.go:52] kubelet running: false
	I1206 09:47:07.646344  711506 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:47:07.800303  711506 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:47:07.800397  711506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:47:07.890264  711506 cri.go:89] found id: "d510c6b2fe97869dbc63deaec59ea1bcc7e242180e112d0b933cc14d427eb0d7"
	I1206 09:47:07.890312  711506 cri.go:89] found id: "9084044e377dbea24c01a6f7b83522ce0cca64c749166b0ed4b1dd1fb9b67766"
	I1206 09:47:07.890318  711506 cri.go:89] found id: "5636af128e1d708b302dceab1c741afccd110a21e7b5be3cf8a0fabb030253ab"
	I1206 09:47:07.890324  711506 cri.go:89] found id: "8ca3492af015eb56f5ee9789e7ab502a9333afb2a9f9fdb6d27eee53e3d3671d"
	I1206 09:47:07.890328  711506 cri.go:89] found id: "1198f6d7ee136c379639a8be91363f8629cd112ef08874ca70d1e6b367cf027e"
	I1206 09:47:07.890333  711506 cri.go:89] found id: "d58bb44be0561901a4d07f8c1846c400377d8d00060e7d090693ad0092278bb3"
	I1206 09:47:07.890338  711506 cri.go:89] found id: "14a6067fda89cc7a9c2cfa81c10a8cf8fd8f7cc6f9b0ba4ecd8fe602f13f3c55"
	I1206 09:47:07.890343  711506 cri.go:89] found id: ""
	I1206 09:47:07.890416  711506 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:47:07.905429  711506 out.go:203] 
	W1206 09:47:07.906562  711506 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:47:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:47:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:47:07.906584  711506 out.go:285] * 
	* 
	W1206 09:47:07.913410  711506 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:47:07.915028  711506 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-137950 --alsologtostderr -v=5" : exit status 80
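Note the two retry.go:31 lines above: pause retries the runc probe with a short randomized backoff (373ms, then 407ms) before giving up with GUEST_PAUSE. A rough sketch of that retry pattern, with illustrative attempt count and jitter bounds rather than minikube's exact values:

-- go sketch --
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunning is the probe that keeps failing in the log above: with
// /run/runc absent, runc exits 1 on every attempt.
func listRunning() error {
	return exec.Command("sudo", "runc", "list", "-f", "json").Run()
}

// withRetry retries fn with a small randomized delay, mirroring the
// "will retry after 373.265484ms / 407.770848ms" lines.
func withRetry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := 300*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
		fmt.Printf("will retry after %v: list running: %v\n", d, err)
		time.Sleep(d)
	}
	return fmt.Errorf("Pause: list running: %w", err) // surfaces as exit status 80
}

func main() {
	if err := withRetry(3, listRunning); err != nil {
		fmt.Println("X Exiting due to GUEST_PAUSE:", err)
	}
}
-- /go sketch --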
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-137950
helpers_test.go:243: (dbg) docker inspect pause-137950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683",
	        "Created": "2025-12-06T09:45:50.29704516Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 690781,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:45:50.338995667Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683/hostname",
	        "HostsPath": "/var/lib/docker/containers/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683/hosts",
	        "LogPath": "/var/lib/docker/containers/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683-json.log",
	        "Name": "/pause-137950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-137950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-137950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683",
	                "LowerDir": "/var/lib/docker/overlay2/e33b63f5efb4c49f24ea116fbc4abc28569765106b8a185b61a632c633fbd0c1-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e33b63f5efb4c49f24ea116fbc4abc28569765106b8a185b61a632c633fbd0c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e33b63f5efb4c49f24ea116fbc4abc28569765106b8a185b61a632c633fbd0c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e33b63f5efb4c49f24ea116fbc4abc28569765106b8a185b61a632c633fbd0c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-137950",
	                "Source": "/var/lib/docker/volumes/pause-137950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-137950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-137950",
	                "name.minikube.sigs.k8s.io": "pause-137950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cf94ad5b619fd04a8bcdb11ca797abb0baa9cf0d539e41748093ab0ee3020d72",
	            "SandboxKey": "/var/run/docker/netns/cf94ad5b619f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-137950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "715a763b13ae7b8ce8af007cce1de5ca1a90d8a8bb5effa8e584fdcb8775bf08",
	                    "EndpointID": "5fe9d8dc14f979e0f6179872376c00a9f33f26df97b4349e4948097ab352998a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "36:49:1e:5d:14:79",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-137950",
	                        "709a4137ca3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
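The Ports map in this inspect output is how the earlier cli_runner call resolved the SSH endpoint: the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} picks out 33106, matching the "new ssh client" line at 127.0.0.1:33106 in the pause log. The same extraction done in Go rather than via the inspect -f template, with the struct trimmed to the fields shown above:

-- go sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry keeps only NetworkSettings.Ports from `docker inspect`,
// which returns a JSON array with one entry per container.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-137950").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	// For the container above this prints 127.0.0.1:33106, the SSH
	// endpoint minikube derived with its inspect -f template.
	ssh := entries[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("%s:%s\n", ssh.HostIp, ssh.HostPort)
}
-- /go sketch --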
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-137950 -n pause-137950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-137950 -n pause-137950: exit status 2 (428.92441ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-137950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-137950 logs -n 25: (1.133407839s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-912259 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-912259       │ jenkins │ v1.37.0 │ 06 Dec 25 09:44 UTC │ 06 Dec 25 09:45 UTC │
	│ delete  │ -p scheduled-stop-912259                                                                                                                 │ scheduled-stop-912259       │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:45 UTC │
	│ start   │ -p insufficient-storage-941183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-941183 │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ delete  │ -p insufficient-storage-941183                                                                                                           │ insufficient-storage-941183 │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:45 UTC │
	│ start   │ -p force-systemd-env-168450 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-168450    │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p offline-crio-120041 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-120041         │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ start   │ -p pause-137950 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-137950                │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ delete  │ -p force-systemd-env-168450                                                                                                              │ force-systemd-env-168450    │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p missing-upgrade-633386 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-633386      │ jenkins │ v1.35.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:47 UTC │
	│ delete  │ -p NoKubernetes-184706                                                                                                                   │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ delete  │ -p offline-crio-120041                                                                                                                   │ offline-crio-120041         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-581224   │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │                     │
	│ ssh     │ -p NoKubernetes-184706 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │                     │
	│ stop    │ -p NoKubernetes-184706                                                                                                                   │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p pause-137950 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-137950                │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:47 UTC │
	│ ssh     │ -p NoKubernetes-184706 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	│ delete  │ -p NoKubernetes-184706                                                                                                                   │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │ 06 Dec 25 09:47 UTC │
	│ start   │ -p force-systemd-flag-996303 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-996303   │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	│ pause   │ -p pause-137950 --alsologtostderr -v=5                                                                                                   │ pause-137950                │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	│ start   │ -p missing-upgrade-633386 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-633386      │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:47:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:47:08.139221  712244 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:47:08.139366  712244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:47:08.139373  712244 out.go:374] Setting ErrFile to fd 2...
	I1206 09:47:08.139378  712244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:47:08.139704  712244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:47:08.140277  712244 out.go:368] Setting JSON to false
	I1206 09:47:08.141510  712244 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8972,"bootTime":1765005456,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:47:08.141608  712244 start.go:143] virtualization: kvm guest
	I1206 09:47:08.144605  712244 out.go:179] * [missing-upgrade-633386] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:47:08.145745  712244 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:47:08.145769  712244 notify.go:221] Checking for updates...
	I1206 09:47:08.147669  712244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:47:08.148912  712244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:47:08.149991  712244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:47:08.151111  712244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:47:08.155663  712244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:47:08.157550  712244 config.go:182] Loaded profile config "missing-upgrade-633386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:47:08.159417  712244 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1206 09:47:08.160540  712244 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:47:08.203571  712244 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:47:08.203692  712244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:47:08.290687  712244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-06 09:47:08.275773364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:47:08.290832  712244 docker.go:319] overlay module found
	I1206 09:47:08.293288  712244 out.go:179] * Using the docker driver based on existing profile
	I1206 09:47:08.294390  712244 start.go:309] selected driver: docker
	I1206 09:47:08.294409  712244 start.go:927] validating driver "docker" against &{Name:missing-upgrade-633386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-633386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:47:08.294528  712244 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:47:08.295371  712244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:47:08.385135  712244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-06 09:47:08.370219136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:47:08.385521  712244 cni.go:84] Creating CNI manager for ""
	I1206 09:47:08.385628  712244 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:47:08.385700  712244 start.go:353] cluster config:
	{Name:missing-upgrade-633386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-633386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:47:08.392442  712244 out.go:179] * Starting "missing-upgrade-633386" primary control-plane node in "missing-upgrade-633386" cluster
	I1206 09:47:08.394630  712244 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:47:08.395774  712244 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:47:08.397260  712244 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1206 09:47:08.397301  712244 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:47:08.397312  712244 cache.go:65] Caching tarball of preloaded images
	I1206 09:47:08.397344  712244 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1206 09:47:08.397407  712244 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:47:08.397419  712244 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1206 09:47:08.397563  712244 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/missing-upgrade-633386/config.json ...
	I1206 09:47:08.430764  712244 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1206 09:47:08.430794  712244 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1206 09:47:08.430810  712244 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:47:08.430845  712244 start.go:360] acquireMachinesLock for missing-upgrade-633386: {Name:mk7a94cc4969e087661a4fac98f879f8a144660d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:47:08.430918  712244 start.go:364] duration metric: took 38.764µs to acquireMachinesLock for "missing-upgrade-633386"
	I1206 09:47:08.430938  712244 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:47:08.430944  712244 fix.go:54] fixHost starting: 
	I1206 09:47:08.431226  712244 cli_runner.go:164] Run: docker container inspect missing-upgrade-633386 --format={{.State.Status}}
	W1206 09:47:08.456738  712244 cli_runner.go:211] docker container inspect missing-upgrade-633386 --format={{.State.Status}} returned with exit code 1
	I1206 09:47:08.456870  712244 fix.go:112] recreateIfNeeded on missing-upgrade-633386: state= err=unknown state "missing-upgrade-633386": docker container inspect missing-upgrade-633386 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-633386
	I1206 09:47:08.456959  712244 fix.go:117] machineExists: false. err=machine does not exist
	I1206 09:47:08.459419  712244 out.go:179] * docker "missing-upgrade-633386" container is missing, will recreate.
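
The recreate decision above hinges on a single probe: docker container inspect --format={{.State.Status}}, where exit status 1 plus "No such container" on stderr is mapped to machineExists: false. A minimal Go sketch of that same check (an editor's illustration, not minikube's actual fix.go code; the container name comes from the log above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState mirrors the probe in the log above:
	// docker container inspect <name> --format={{.State.Status}}.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			// Exit status 1 with "No such container" on stderr is the
			// missing-machine case that triggers the recreate path above.
			if strings.Contains(string(out), "No such container") {
				return "", fmt.Errorf("machine does not exist")
			}
			return "", fmt.Errorf("unknown state: %v: %s", err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		if state, err := containerState("missing-upgrade-633386"); err != nil {
			fmt.Println("will recreate:", err)
		} else {
			fmt.Println("state:", state)
		}
	}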
	
	
	==> CRI-O <==
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.25603123Z" level=info msg="RDT not available in the host system"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.256046673Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.257440424Z" level=info msg="Conmon does support the --sync option"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.257474157Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.257492032Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.258520014Z" level=info msg="Conmon does support the --sync option"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.258543303Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.263647122Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.263677529Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.266107804Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.266690152Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.266753702Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.36729931Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-p66ll Namespace:kube-system ID:5762bf912794358eb1635ebe887cfc57622c47ac8bf96e9960da478a30541bcf UID:ee28799a-68e2-4162-8627-a4218134d6bf NetNS:/var/run/netns/335db097-92e2-437c-ac7d-d26477f28fb9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008263e8}] Aliases:map[]}"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.36765222Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-p66ll for CNI network kindnet (type=ptp)"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368313946Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368354712Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368413299Z" level=info msg="Create NRI interface"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368578244Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368593772Z" level=info msg="runtime interface created"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368607228Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368615359Z" level=info msg="runtime interface starting up..."
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368622889Z" level=info msg="starting plugins..."
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368641861Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.369068621Z" level=info msg="No systemd watchdog enabled"
	Dec 06 09:47:02 pause-137950 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
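
Once CRI-O logs that the runtime interface is up, its effective state (including the kindnet CNI network it registered above) can be queried over the CRI socket. A hedged Go sketch shelling out to "crictl info", assuming crictl is installed and configured for /var/run/crio/crio.sock; this is an editor's illustration, not part of the test harness:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// "crictl info" prints the runtime status as JSON; the "status"
		// key carries the RuntimeReady and NetworkReady conditions, which
		// should both be true after the startup sequence logged above.
		out, err := exec.Command("crictl", "info").Output()
		if err != nil {
			fmt.Println("crictl info failed:", err)
			return
		}
		var info map[string]any
		if err := json.Unmarshal(out, &info); err != nil {
			fmt.Println("unexpected output:", err)
			return
		}
		pretty, _ := json.MarshalIndent(info["status"], "", "  ")
		fmt.Println(string(pretty))
	}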
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d510c6b2fe978       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago       Running             coredns                   0                   5762bf9127943       coredns-66bc5c9577-p66ll               kube-system
	9084044e377db       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   53 seconds ago       Running             kube-proxy                0                   3805115e6a861       kube-proxy-nph7w                       kube-system
	5636af128e1d7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   53 seconds ago       Running             kindnet-cni               0                   e7a0674c93d3f       kindnet-wlg57                          kube-system
	8ca3492af015e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   7d4996a0de138       kube-controller-manager-pause-137950   kube-system
	1198f6d7ee136       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   85a3a02cae790       kube-scheduler-pause-137950            kube-system
	d58bb44be0561       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   3ccf5d4a49817       etcd-pause-137950                      kube-system
	14a6067fda89c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   9881c1a37eea8       kube-apiserver-pause-137950            kube-system
	
	
	==> coredns [d510c6b2fe97869dbc63deaec59ea1bcc7e242180e112d0b933cc14d427eb0d7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56242 - 28800 "HINFO IN 4076338263945232146.7940878662408451562. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030980884s
	
	
	==> describe nodes <==
	Name:               pause-137950
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-137950
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=pause-137950
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_46_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:46:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-137950
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:47:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:46:56 +0000   Sat, 06 Dec 2025 09:46:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:46:56 +0000   Sat, 06 Dec 2025 09:46:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:46:56 +0000   Sat, 06 Dec 2025 09:46:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:46:56 +0000   Sat, 06 Dec 2025 09:46:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-137950
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                7787a7e8-4a31-45ec-a3dc-f1e692056431
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p66ll                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-pause-137950                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-wlg57                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-pause-137950             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-pause-137950    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-nph7w                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-137950             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 53s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node pause-137950 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node pause-137950 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node pause-137950 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node pause-137950 event: Registered Node pause-137950 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-137950 status is now: NodeReady
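
The "describe nodes" dump is kubectl's rendering of the Node object's status. A minimal client-go sketch that reads the same readiness conditions programmatically (kubeconfig path and error handling are illustrative, not taken from the test code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-137950", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Prints the same MemoryPressure/DiskPressure/PIDPressure/Ready
		// rows shown in the Conditions table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}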
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [d58bb44be0561901a4d07f8c1846c400377d8d00060e7d090693ad0092278bb3] <==
	{"level":"warn","ts":"2025-12-06T09:46:05.973339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:05.983742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:05.992266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.006157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.018075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.031772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.043824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.057925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.086331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.086502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.096226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.110617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.124383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.138311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.160441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.166905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.185496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.208682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.233465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.239891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.261198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.279522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.283666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.376927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:46:49.029767Z","caller":"traceutil/trace.go:172","msg":"trace[853622130] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"123.304228ms","start":"2025-12-06T09:46:48.906447Z","end":"2025-12-06T09:46:49.029752Z","steps":["trace[853622130] 'process raft request'  (duration: 121.822501ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:47:09 up  2:29,  0 user,  load average: 4.35, 1.87, 3.19
	Linux pause-137950 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5636af128e1d708b302dceab1c741afccd110a21e7b5be3cf8a0fabb030253ab] <==
	I1206 09:46:15.647147       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:46:15.648330       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 09:46:15.648679       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:46:15.648703       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:46:15.648727       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:46:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:46:15.943117       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:46:15.943163       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:46:15.943178       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:46:15.943404       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 09:46:45.943842       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 09:46:45.943862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1206 09:46:45.943872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 09:46:45.943875       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1206 09:46:47.444210       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:46:47.444241       1 metrics.go:72] Registering metrics
	I1206 09:46:47.444340       1 controller.go:711] "Syncing nftables rules"
	I1206 09:46:55.950551       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:46:55.950623       1 main.go:301] handling current node
	I1206 09:47:05.943272       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:47:05.943305       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14a6067fda89cc7a9c2cfa81c10a8cf8fd8f7cc6f9b0ba4ecd8fe602f13f3c55] <==
	I1206 09:46:07.334686       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:46:07.334696       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:46:07.346523       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:46:07.354572       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1206 09:46:07.354725       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:46:07.381737       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:46:07.381925       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:46:07.511446       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:46:08.118426       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:46:08.122848       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:46:08.122872       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:46:08.653728       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:46:08.695604       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:46:08.825913       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:46:08.831913       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1206 09:46:08.833058       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:46:08.837691       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:46:09.149586       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:46:10.005890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:46:10.017783       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:46:10.029476       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:46:14.853774       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:46:14.859868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:46:15.051491       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:46:15.102570       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8ca3492af015eb56f5ee9789e7ab502a9333afb2a9f9fdb6d27eee53e3d3671d] <==
	I1206 09:46:14.148237       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:46:14.148249       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:46:14.148257       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:46:14.148437       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:46:14.148568       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:46:14.148663       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:46:14.149671       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:46:14.149772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:46:14.149770       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:46:14.152635       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 09:46:14.152701       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 09:46:14.152751       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 09:46:14.152762       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 09:46:14.152769       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 09:46:14.155239       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:46:14.156313       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:46:14.156421       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 09:46:14.156532       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:46:14.156644       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-137950"
	I1206 09:46:14.156709       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1206 09:46:14.159077       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-137950" podCIDRs=["10.244.0.0/24"]
	I1206 09:46:14.161704       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:46:14.167160       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:46:14.172381       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:46:59.163720       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9084044e377dbea24c01a6f7b83522ce0cca64c749166b0ed4b1dd1fb9b67766] <==
	I1206 09:46:15.581699       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:46:15.675217       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:46:15.776306       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:46:15.776375       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 09:46:15.776515       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:46:15.802659       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:46:15.802720       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:46:15.808512       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:46:15.808987       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:46:15.809017       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:46:15.811094       1 config.go:200] "Starting service config controller"
	I1206 09:46:15.811817       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:46:15.811399       1 config.go:309] "Starting node config controller"
	I1206 09:46:15.811555       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:46:15.811909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:46:15.811922       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:46:15.811927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:46:15.811581       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:46:15.811938       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:46:15.912090       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:46:15.912091       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:46:15.912139       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1198f6d7ee136c379639a8be91363f8629cd112ef08874ca70d1e6b367cf027e] <==
	E1206 09:46:07.270047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:46:07.270250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:46:07.269849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:46:07.270607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:46:07.270749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:46:07.270896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:46:07.270927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:46:07.271030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:46:07.271153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:46:07.271154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:46:07.271221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:46:07.271255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:46:07.271309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:46:07.271363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:46:07.271365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:46:07.272890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:46:07.272914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:46:08.100722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:46:08.221579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:46:08.237666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:46:08.284681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:46:08.406249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:46:08.439523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:46:08.447856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:46:10.364167       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
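
The burst of "forbidden" list errors at 09:46:07-08 is the scheduler starting before the apiserver has finished bootstrapping its default RBAC bindings; the "Caches are synced" line at 09:46:10 shows it recovered. A quick way to confirm those grants after the fact, sketched in Go around plain kubectl (the context name matches the profile used by the helpers below):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Impersonate the scheduler and ask whether it may list nodes,
		// mirroring one of the watches that failed above. kubectl exits
		// non-zero when the answer is "no", so the error is informative.
		out, err := exec.Command("kubectl", "--context", "pause-137950",
			"auth", "can-i", "list", "nodes",
			"--as", "system:kube-scheduler").CombinedOutput()
		fmt.Print(string(out)) // "yes" once RBAC bootstrapping completed
		if err != nil {
			fmt.Println("can-i returned:", err)
		}
	}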
	
	
	==> kubelet <==
	Dec 06 09:46:10 pause-137950 kubelet[1309]: I1206 09:46:10.981230    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-137950" podStartSLOduration=2.981199016 podStartE2EDuration="2.981199016s" podCreationTimestamp="2025-12-06 09:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:10.965702984 +0000 UTC m=+1.185397204" watchObservedRunningTime="2025-12-06 09:46:10.981199016 +0000 UTC m=+1.200893234"
	Dec 06 09:46:11 pause-137950 kubelet[1309]: I1206 09:46:11.014736    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-137950" podStartSLOduration=1.014712695 podStartE2EDuration="1.014712695s" podCreationTimestamp="2025-12-06 09:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:10.983423186 +0000 UTC m=+1.203117481" watchObservedRunningTime="2025-12-06 09:46:11.014712695 +0000 UTC m=+1.234406913"
	Dec 06 09:46:11 pause-137950 kubelet[1309]: I1206 09:46:11.014880    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-137950" podStartSLOduration=1.014874262 podStartE2EDuration="1.014874262s" podCreationTimestamp="2025-12-06 09:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:11.014353896 +0000 UTC m=+1.234048117" watchObservedRunningTime="2025-12-06 09:46:11.014874262 +0000 UTC m=+1.234568482"
	Dec 06 09:46:11 pause-137950 kubelet[1309]: I1206 09:46:11.033587    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-137950" podStartSLOduration=1.032752541 podStartE2EDuration="1.032752541s" podCreationTimestamp="2025-12-06 09:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:11.032438372 +0000 UTC m=+1.252132575" watchObservedRunningTime="2025-12-06 09:46:11.032752541 +0000 UTC m=+1.252446760"
	Dec 06 09:46:14 pause-137950 kubelet[1309]: I1206 09:46:14.246213    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:46:14 pause-137950 kubelet[1309]: I1206 09:46:14.247038    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220682    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c43f9cb3-fd04-4ec0-a0be-39fe5319751e-cni-cfg\") pod \"kindnet-wlg57\" (UID: \"c43f9cb3-fd04-4ec0-a0be-39fe5319751e\") " pod="kube-system/kindnet-wlg57"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220736    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c43f9cb3-fd04-4ec0-a0be-39fe5319751e-lib-modules\") pod \"kindnet-wlg57\" (UID: \"c43f9cb3-fd04-4ec0-a0be-39fe5319751e\") " pod="kube-system/kindnet-wlg57"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220766    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c43f9cb3-fd04-4ec0-a0be-39fe5319751e-xtables-lock\") pod \"kindnet-wlg57\" (UID: \"c43f9cb3-fd04-4ec0-a0be-39fe5319751e\") " pod="kube-system/kindnet-wlg57"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220795    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e43f165f-f179-4468-96a4-e719b6dd0e33-kube-proxy\") pod \"kube-proxy-nph7w\" (UID: \"e43f165f-f179-4468-96a4-e719b6dd0e33\") " pod="kube-system/kube-proxy-nph7w"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220819    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e43f165f-f179-4468-96a4-e719b6dd0e33-xtables-lock\") pod \"kube-proxy-nph7w\" (UID: \"e43f165f-f179-4468-96a4-e719b6dd0e33\") " pod="kube-system/kube-proxy-nph7w"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220839    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e43f165f-f179-4468-96a4-e719b6dd0e33-lib-modules\") pod \"kube-proxy-nph7w\" (UID: \"e43f165f-f179-4468-96a4-e719b6dd0e33\") " pod="kube-system/kube-proxy-nph7w"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220866    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrch\" (UniqueName: \"kubernetes.io/projected/c43f9cb3-fd04-4ec0-a0be-39fe5319751e-kube-api-access-pwrch\") pod \"kindnet-wlg57\" (UID: \"c43f9cb3-fd04-4ec0-a0be-39fe5319751e\") " pod="kube-system/kindnet-wlg57"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220890    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfkn9\" (UniqueName: \"kubernetes.io/projected/e43f165f-f179-4468-96a4-e719b6dd0e33-kube-api-access-lfkn9\") pod \"kube-proxy-nph7w\" (UID: \"e43f165f-f179-4468-96a4-e719b6dd0e33\") " pod="kube-system/kube-proxy-nph7w"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.970938    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nph7w" podStartSLOduration=0.970913609 podStartE2EDuration="970.913609ms" podCreationTimestamp="2025-12-06 09:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:15.961577435 +0000 UTC m=+6.181271652" watchObservedRunningTime="2025-12-06 09:46:15.970913609 +0000 UTC m=+6.190607822"
	Dec 06 09:46:17 pause-137950 kubelet[1309]: I1206 09:46:17.173332    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wlg57" podStartSLOduration=2.17330917 podStartE2EDuration="2.17330917s" podCreationTimestamp="2025-12-06 09:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:15.97114907 +0000 UTC m=+6.190843288" watchObservedRunningTime="2025-12-06 09:46:17.17330917 +0000 UTC m=+7.393003391"
	Dec 06 09:46:56 pause-137950 kubelet[1309]: I1206 09:46:56.188702    1309 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:46:56 pause-137950 kubelet[1309]: I1206 09:46:56.319912    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g95r\" (UniqueName: \"kubernetes.io/projected/ee28799a-68e2-4162-8627-a4218134d6bf-kube-api-access-2g95r\") pod \"coredns-66bc5c9577-p66ll\" (UID: \"ee28799a-68e2-4162-8627-a4218134d6bf\") " pod="kube-system/coredns-66bc5c9577-p66ll"
	Dec 06 09:46:56 pause-137950 kubelet[1309]: I1206 09:46:56.319966    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee28799a-68e2-4162-8627-a4218134d6bf-config-volume\") pod \"coredns-66bc5c9577-p66ll\" (UID: \"ee28799a-68e2-4162-8627-a4218134d6bf\") " pod="kube-system/coredns-66bc5c9577-p66ll"
	Dec 06 09:46:57 pause-137950 kubelet[1309]: I1206 09:46:57.056764    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p66ll" podStartSLOduration=42.056742756 podStartE2EDuration="42.056742756s" podCreationTimestamp="2025-12-06 09:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:57.056499473 +0000 UTC m=+47.276193694" watchObservedRunningTime="2025-12-06 09:46:57.056742756 +0000 UTC m=+47.276436976"
	Dec 06 09:47:06 pause-137950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:47:06 pause-137950 kubelet[1309]: I1206 09:47:06.531554    1309 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 06 09:47:06 pause-137950 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:47:06 pause-137950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:47:06 pause-137950 systemd[1]: kubelet.service: Consumed 2.403s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-137950 -n pause-137950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-137950 -n pause-137950: exit status 2 (330.49541ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
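
The --format={{.APIServer}} argument above is a Go text/template that minikube evaluates against its status struct, so the command prints just that one field ("Running" here) while the exit code separately encodes the overall state. A minimal sketch of that rendering step, using a stand-in struct for illustration rather than minikube's actual type:

	// status_format.go - illustrative only; the Status type below is a
	// stand-in, not minikube's real status definition.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// Parse the user-supplied format string as a template and render it
		// against the status value, printing only the requested field.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"})
	}
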
helpers_test.go:269: (dbg) Run:  kubectl --context pause-137950 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
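
This probe passes when it prints nothing: the field selector status.phase!=Running narrows the pod list, across all namespaces, to any pod that is not in phase Running. A minimal sketch of driving the same probe from Go (assumes kubectl is on PATH; the context name pause-137950 is taken from the log above):

	// nonrunning.go - reproduce the post-mortem pod check by shelling out
	// to kubectl with the same flags the harness uses.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "pause-137950",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		// Empty output means every pod is currently in phase Running.
		fmt.Printf("non-Running pods: %q\n", out)
	}
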
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
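
The snapshot above prints "<empty>" for each unset proxy variable, confirming no HTTP(S) proxy sat between the harness and the cluster. A trivial sketch of taking the same snapshot in Go:

	// proxysnap.go - print the proxy-related environment the way the
	// post-mortem line above does, substituting "<empty>" for unset values.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>"
			}
			fmt.Printf("%s=%q ", k, v)
		}
		fmt.Println()
	}
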
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-137950
helpers_test.go:243: (dbg) docker inspect pause-137950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683",
	        "Created": "2025-12-06T09:45:50.29704516Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 690781,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:45:50.338995667Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683/hostname",
	        "HostsPath": "/var/lib/docker/containers/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683/hosts",
	        "LogPath": "/var/lib/docker/containers/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683/709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683-json.log",
	        "Name": "/pause-137950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-137950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-137950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "709a4137ca3b297a8f06cb363e57ed39fa630c469f0a91b871aa90f223934683",
	                "LowerDir": "/var/lib/docker/overlay2/e33b63f5efb4c49f24ea116fbc4abc28569765106b8a185b61a632c633fbd0c1-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e33b63f5efb4c49f24ea116fbc4abc28569765106b8a185b61a632c633fbd0c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e33b63f5efb4c49f24ea116fbc4abc28569765106b8a185b61a632c633fbd0c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e33b63f5efb4c49f24ea116fbc4abc28569765106b8a185b61a632c633fbd0c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-137950",
	                "Source": "/var/lib/docker/volumes/pause-137950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-137950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-137950",
	                "name.minikube.sigs.k8s.io": "pause-137950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cf94ad5b619fd04a8bcdb11ca797abb0baa9cf0d539e41748093ab0ee3020d72",
	            "SandboxKey": "/var/run/docker/netns/cf94ad5b619f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-137950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "715a763b13ae7b8ce8af007cce1de5ca1a90d8a8bb5effa8e584fdcb8775bf08",
	                    "EndpointID": "5fe9d8dc14f979e0f6179872376c00a9f33f26df97b4349e4948097ab352998a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "36:49:1e:5d:14:79",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-137950",
	                        "709a4137ca3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
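
Everything the harness needs to reach the node is in the NetworkSettings.Ports map above: 22/tcp, for example, is published on 127.0.0.1:33106, the address the SSH client in these logs dials. A minimal sketch, assuming docker is on PATH, of extracting that mapping programmatically instead of with an inline --format template like the one in the cli_runner lines later in these logs:

	// sshport.go - decode `docker inspect` output and pull the host port
	// bound to the container's SSH port.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields we need from the inspect JSON shown above.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		raw, err := exec.Command("docker", "inspect", "pause-137950").Output()
		if err != nil {
			panic(err)
		}
		var cs []container // docker inspect always returns a JSON array
		if err := json.Unmarshal(raw, &cs); err != nil {
			panic(err)
		}
		p := cs[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("ssh endpoint: %s:%s\n", p.HostIp, p.HostPort)
	}
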
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-137950 -n pause-137950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-137950 -n pause-137950: exit status 2 (321.485724ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-137950 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p scheduled-stop-912259                                                                                                                 │ scheduled-stop-912259       │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:45 UTC │
	│ start   │ -p insufficient-storage-941183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-941183 │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ delete  │ -p insufficient-storage-941183                                                                                                           │ insufficient-storage-941183 │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:45 UTC │
	│ start   │ -p force-systemd-env-168450 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-168450    │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p offline-crio-120041 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-120041         │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ start   │ -p pause-137950 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-137950                │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ delete  │ -p force-systemd-env-168450                                                                                                              │ force-systemd-env-168450    │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p missing-upgrade-633386 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-633386      │ jenkins │ v1.35.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:47 UTC │
	│ delete  │ -p NoKubernetes-184706                                                                                                                   │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ delete  │ -p offline-crio-120041                                                                                                                   │ offline-crio-120041         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-581224   │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:47 UTC │
	│ ssh     │ -p NoKubernetes-184706 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │                     │
	│ stop    │ -p NoKubernetes-184706                                                                                                                   │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p NoKubernetes-184706 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p pause-137950 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-137950                │ jenkins │ v1.37.0 │ 06 Dec 25 09:46 UTC │ 06 Dec 25 09:47 UTC │
	│ ssh     │ -p NoKubernetes-184706 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	│ delete  │ -p NoKubernetes-184706                                                                                                                   │ NoKubernetes-184706         │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │ 06 Dec 25 09:47 UTC │
	│ start   │ -p force-systemd-flag-996303 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-996303   │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	│ pause   │ -p pause-137950 --alsologtostderr -v=5                                                                                                   │ pause-137950                │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	│ start   │ -p missing-upgrade-633386 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-633386      │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-581224                                                                                                             │ kubernetes-upgrade-581224   │ jenkins │ v1.37.0 │ 06 Dec 25 09:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:47:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:47:08.139221  712244 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:47:08.139366  712244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:47:08.139373  712244 out.go:374] Setting ErrFile to fd 2...
	I1206 09:47:08.139378  712244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:47:08.139704  712244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:47:08.140277  712244 out.go:368] Setting JSON to false
	I1206 09:47:08.141510  712244 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8972,"bootTime":1765005456,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:47:08.141608  712244 start.go:143] virtualization: kvm guest
	I1206 09:47:08.144605  712244 out.go:179] * [missing-upgrade-633386] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:47:08.145745  712244 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:47:08.145769  712244 notify.go:221] Checking for updates...
	I1206 09:47:08.147669  712244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:47:08.148912  712244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:47:08.149991  712244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:47:08.151111  712244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:47:08.155663  712244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:47:08.157550  712244 config.go:182] Loaded profile config "missing-upgrade-633386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:47:08.159417  712244 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1206 09:47:08.160540  712244 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:47:08.203571  712244 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:47:08.203692  712244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:47:08.290687  712244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-06 09:47:08.275773364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:47:08.290832  712244 docker.go:319] overlay module found
	I1206 09:47:08.293288  712244 out.go:179] * Using the docker driver based on existing profile
	I1206 09:47:08.294390  712244 start.go:309] selected driver: docker
	I1206 09:47:08.294409  712244 start.go:927] validating driver "docker" against &{Name:missing-upgrade-633386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-633386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:47:08.294528  712244 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:47:08.295371  712244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:47:08.385135  712244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-06 09:47:08.370219136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:47:08.385521  712244 cni.go:84] Creating CNI manager for ""
	I1206 09:47:08.385628  712244 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:47:08.385700  712244 start.go:353] cluster config:
	{Name:missing-upgrade-633386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-633386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:47:08.392442  712244 out.go:179] * Starting "missing-upgrade-633386" primary control-plane node in "missing-upgrade-633386" cluster
	I1206 09:47:08.394630  712244 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:47:08.395774  712244 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:47:08.397260  712244 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1206 09:47:08.397301  712244 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:47:08.397312  712244 cache.go:65] Caching tarball of preloaded images
	I1206 09:47:08.397344  712244 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1206 09:47:08.397407  712244 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:47:08.397419  712244 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1206 09:47:08.397563  712244 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/missing-upgrade-633386/config.json ...
	I1206 09:47:08.430764  712244 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1206 09:47:08.430794  712244 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1206 09:47:08.430810  712244 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:47:08.430845  712244 start.go:360] acquireMachinesLock for missing-upgrade-633386: {Name:mk7a94cc4969e087661a4fac98f879f8a144660d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:47:08.430918  712244 start.go:364] duration metric: took 38.764µs to acquireMachinesLock for "missing-upgrade-633386"
	I1206 09:47:08.430938  712244 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:47:08.430944  712244 fix.go:54] fixHost starting: 
	I1206 09:47:08.431226  712244 cli_runner.go:164] Run: docker container inspect missing-upgrade-633386 --format={{.State.Status}}
	W1206 09:47:08.456738  712244 cli_runner.go:211] docker container inspect missing-upgrade-633386 --format={{.State.Status}} returned with exit code 1
	I1206 09:47:08.456870  712244 fix.go:112] recreateIfNeeded on missing-upgrade-633386: state= err=unknown state "missing-upgrade-633386": docker container inspect missing-upgrade-633386 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-633386
	I1206 09:47:08.456959  712244 fix.go:117] machineExists: false. err=machine does not exist
	I1206 09:47:08.459419  712244 out.go:179] * docker "missing-upgrade-633386" container is missing, will recreate.
	I1206 09:47:07.604890  701632 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:47:07.611741  701632 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1206 09:47:07.611766  701632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:47:07.628584  701632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:47:08.585181  701632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:47:08.585352  701632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:47:08.585501  701632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-581224 minikube.k8s.io/updated_at=2025_12_06T09_47_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=kubernetes-upgrade-581224 minikube.k8s.io/primary=true
	I1206 09:47:08.607974  701632 ops.go:34] apiserver oom_adj: -16
	I1206 09:47:08.700785  701632 kubeadm.go:1114] duration metric: took 115.478976ms to wait for elevateKubeSystemPrivileges
	I1206 09:47:08.706281  701632 kubeadm.go:403] duration metric: took 11.97806814s to StartCluster
	I1206 09:47:08.706317  701632 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:47:08.706400  701632 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:47:08.707845  701632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:47:08.708077  701632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:47:08.708097  701632 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:47:08.708411  701632 config.go:182] Loaded profile config "kubernetes-upgrade-581224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:47:08.708369  701632 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:47:08.708529  701632 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-581224"
	I1206 09:47:08.708579  701632 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-581224"
	I1206 09:47:08.708611  701632 host.go:66] Checking if "kubernetes-upgrade-581224" exists ...
	I1206 09:47:08.708961  701632 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-581224"
	I1206 09:47:08.708986  701632 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-581224"
	I1206 09:47:08.709139  701632 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-581224 --format={{.State.Status}}
	I1206 09:47:08.709233  701632 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-581224 --format={{.State.Status}}
	I1206 09:47:08.710929  701632 out.go:179] * Verifying Kubernetes components...
	I1206 09:47:08.711951  701632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:47:08.731923  701632 kapi.go:59] client config for kubernetes-upgrade-581224: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.key", CAFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:47:08.732512  701632 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 09:47:08.732537  701632 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 09:47:08.732544  701632 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 09:47:08.732554  701632 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 09:47:08.732560  701632 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 09:47:08.732951  701632 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-581224"
	I1206 09:47:08.732993  701632 host.go:66] Checking if "kubernetes-upgrade-581224" exists ...
	I1206 09:47:08.733416  701632 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-581224 --format={{.State.Status}}
	I1206 09:47:08.733549  701632 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:47:08.734834  701632 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:47:08.734854  701632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:47:08.734904  701632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:47:08.763533  701632 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:47:08.763691  701632 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:47:08.763924  701632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:47:08.769484  701632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kubernetes-upgrade-581224/id_rsa Username:docker}
	I1206 09:47:08.789240  701632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kubernetes-upgrade-581224/id_rsa Username:docker}
	I1206 09:47:08.814252  701632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:47:08.863162  701632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:47:08.891142  701632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:47:08.909904  701632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:47:09.076590  701632 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1206 09:47:09.077647  701632 kapi.go:59] client config for kubernetes-upgrade-581224: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.key", CAFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:47:09.077611  701632 kapi.go:59] client config for kubernetes-upgrade-581224: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.key", CAFile:"/home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:47:09.078037  701632 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:47:09.078091  701632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:47:09.272923  701632 api_server.go:72] duration metric: took 564.788684ms to wait for apiserver process to appear ...
	I1206 09:47:09.272948  701632 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:47:09.272972  701632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:47:09.277860  701632 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1206 09:47:09.278964  701632 api_server.go:141] control plane version: v1.28.0
	I1206 09:47:09.278993  701632 api_server.go:131] duration metric: took 6.03691ms to wait for apiserver health ...
	I1206 09:47:09.279002  701632 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:47:09.280292  701632 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:47:09.281854  701632 addons.go:530] duration metric: took 573.483781ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:47:09.282075  701632 system_pods.go:59] 5 kube-system pods found
	I1206 09:47:09.282100  701632 system_pods.go:61] "etcd-kubernetes-upgrade-581224" [d6b042d4-8ec7-4604-b288-89567df61662] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:47:09.282107  701632 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-581224" [47b4fa2f-9b65-4a49-8b76-b41cca8e23e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:47:09.282118  701632 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-581224" [bcfb3b8f-ed53-41af-af30-a633ae54f657] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:47:09.282123  701632 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-581224" [98a7f015-9eb4-4c05-9e86-64b5d88b64c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:47:09.282130  701632 system_pods.go:61] "storage-provisioner" [d33827c7-6661-4ed5-a532-91e1c4402b04] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1206 09:47:09.282138  701632 system_pods.go:74] duration metric: took 3.131274ms to wait for pod list to return data ...
	I1206 09:47:09.282148  701632 kubeadm.go:587] duration metric: took 574.021182ms to wait for: map[apiserver:true system_pods:true]
	I1206 09:47:09.282162  701632 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:47:09.284121  701632 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:47:09.284148  701632 node_conditions.go:123] node cpu capacity is 8
	I1206 09:47:09.284167  701632 node_conditions.go:105] duration metric: took 1.999687ms to run NodePressure ...
	I1206 09:47:09.284185  701632 start.go:242] waiting for startup goroutines ...
	I1206 09:47:09.581190  701632 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-581224" context rescaled to 1 replicas
	I1206 09:47:09.581243  701632 start.go:247] waiting for cluster config update ...
	I1206 09:47:09.581259  701632 start.go:256] writing updated cluster config ...
	I1206 09:47:09.581599  701632 ssh_runner.go:195] Run: rm -f paused
	I1206 09:47:09.638530  701632 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1206 09:47:09.640311  701632 out.go:203] 
	W1206 09:47:09.641488  701632 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1206 09:47:09.642543  701632 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1206 09:47:09.644103  701632 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-581224" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.25603123Z" level=info msg="RDT not available in the host system"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.256046673Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.257440424Z" level=info msg="Conmon does support the --sync option"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.257474157Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.257492032Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.258520014Z" level=info msg="Conmon does support the --sync option"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.258543303Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.263647122Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.263677529Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.266107804Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.266690152Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.266753702Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.36729931Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-p66ll Namespace:kube-system ID:5762bf912794358eb1635ebe887cfc57622c47ac8bf96e9960da478a30541bcf UID:ee28799a-68e2-4162-8627-a4218134d6bf NetNS:/var/run/netns/335db097-92e2-437c-ac7d-d26477f28fb9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008263e8}] Aliases:map[]}"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.36765222Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-p66ll for CNI network kindnet (type=ptp)"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368313946Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368354712Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368413299Z" level=info msg="Create NRI interface"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368578244Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368593772Z" level=info msg="runtime interface created"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368607228Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368615359Z" level=info msg="runtime interface starting up..."
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368622889Z" level=info msg="starting plugins..."
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.368641861Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 09:47:02 pause-137950 crio[2193]: time="2025-12-06T09:47:02.369068621Z" level=info msg="No systemd watchdog enabled"
	Dec 06 09:47:02 pause-137950 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d510c6b2fe978       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago       Running             coredns                   0                   5762bf9127943       coredns-66bc5c9577-p66ll               kube-system
	9084044e377db       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   55 seconds ago       Running             kube-proxy                0                   3805115e6a861       kube-proxy-nph7w                       kube-system
	5636af128e1d7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   55 seconds ago       Running             kindnet-cni               0                   e7a0674c93d3f       kindnet-wlg57                          kube-system
	8ca3492af015e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   7d4996a0de138       kube-controller-manager-pause-137950   kube-system
	1198f6d7ee136       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   85a3a02cae790       kube-scheduler-pause-137950            kube-system
	d58bb44be0561       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   3ccf5d4a49817       etcd-pause-137950                      kube-system
	14a6067fda89c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   9881c1a37eea8       kube-apiserver-pause-137950            kube-system
	
	
	==> coredns [d510c6b2fe97869dbc63deaec59ea1bcc7e242180e112d0b933cc14d427eb0d7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56242 - 28800 "HINFO IN 4076338263945232146.7940878662408451562. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030980884s
	
	
	==> describe nodes <==
	Name:               pause-137950
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-137950
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=pause-137950
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_46_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:46:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-137950
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:47:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:46:56 +0000   Sat, 06 Dec 2025 09:46:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:46:56 +0000   Sat, 06 Dec 2025 09:46:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:46:56 +0000   Sat, 06 Dec 2025 09:46:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:46:56 +0000   Sat, 06 Dec 2025 09:46:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-137950
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                7787a7e8-4a31-45ec-a3dc-f1e692056431
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p66ll                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-pause-137950                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-wlg57                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-pause-137950             250m (3%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-pause-137950    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-nph7w                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-137950             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 55s   kube-proxy       
	  Normal  Starting                 61s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s   kubelet          Node pause-137950 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s   kubelet          Node pause-137950 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s   kubelet          Node pause-137950 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s   node-controller  Node pause-137950 event: Registered Node pause-137950 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-137950 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [d58bb44be0561901a4d07f8c1846c400377d8d00060e7d090693ad0092278bb3] <==
	{"level":"warn","ts":"2025-12-06T09:46:05.973339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:05.983742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:05.992266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.006157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.018075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.031772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.043824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.057925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.086331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.086502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.096226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.110617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.124383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.138311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.160441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.166905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.185496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.208682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.233465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.239891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.261198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.279522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.283666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:46:06.376927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:46:49.029767Z","caller":"traceutil/trace.go:172","msg":"trace[853622130] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"123.304228ms","start":"2025-12-06T09:46:48.906447Z","end":"2025-12-06T09:46:49.029752Z","steps":["trace[853622130] 'process raft request'  (duration: 121.822501ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:47:10 up  2:29,  0 user,  load average: 4.35, 1.87, 3.19
	Linux pause-137950 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5636af128e1d708b302dceab1c741afccd110a21e7b5be3cf8a0fabb030253ab] <==
	I1206 09:46:15.647147       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:46:15.648330       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 09:46:15.648679       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:46:15.648703       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:46:15.648727       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:46:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:46:15.943117       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:46:15.943163       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:46:15.943178       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:46:15.943404       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 09:46:45.943842       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 09:46:45.943862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1206 09:46:45.943872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 09:46:45.943875       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1206 09:46:47.444210       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:46:47.444241       1 metrics.go:72] Registering metrics
	I1206 09:46:47.444340       1 controller.go:711] "Syncing nftables rules"
	I1206 09:46:55.950551       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:46:55.950623       1 main.go:301] handling current node
	I1206 09:47:05.943272       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:47:05.943305       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14a6067fda89cc7a9c2cfa81c10a8cf8fd8f7cc6f9b0ba4ecd8fe602f13f3c55] <==
	I1206 09:46:07.334686       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:46:07.334696       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:46:07.346523       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:46:07.354572       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1206 09:46:07.354725       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:46:07.381737       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:46:07.381925       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:46:07.511446       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:46:08.118426       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:46:08.122848       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:46:08.122872       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:46:08.653728       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:46:08.695604       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:46:08.825913       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:46:08.831913       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1206 09:46:08.833058       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:46:08.837691       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:46:09.149586       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:46:10.005890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:46:10.017783       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:46:10.029476       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:46:14.853774       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:46:14.859868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:46:15.051491       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:46:15.102570       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8ca3492af015eb56f5ee9789e7ab502a9333afb2a9f9fdb6d27eee53e3d3671d] <==
	I1206 09:46:14.148237       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:46:14.148249       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:46:14.148257       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:46:14.148437       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:46:14.148568       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:46:14.148663       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:46:14.149671       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:46:14.149772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:46:14.149770       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:46:14.152635       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 09:46:14.152701       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 09:46:14.152751       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 09:46:14.152762       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 09:46:14.152769       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 09:46:14.155239       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:46:14.156313       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:46:14.156421       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 09:46:14.156532       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:46:14.156644       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-137950"
	I1206 09:46:14.156709       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1206 09:46:14.159077       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-137950" podCIDRs=["10.244.0.0/24"]
	I1206 09:46:14.161704       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:46:14.167160       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:46:14.172381       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:46:59.163720       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9084044e377dbea24c01a6f7b83522ce0cca64c749166b0ed4b1dd1fb9b67766] <==
	I1206 09:46:15.581699       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:46:15.675217       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:46:15.776306       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:46:15.776375       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 09:46:15.776515       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:46:15.802659       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:46:15.802720       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:46:15.808512       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:46:15.808987       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:46:15.809017       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:46:15.811094       1 config.go:200] "Starting service config controller"
	I1206 09:46:15.811817       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:46:15.811399       1 config.go:309] "Starting node config controller"
	I1206 09:46:15.811555       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:46:15.811909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:46:15.811922       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:46:15.811927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:46:15.811581       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:46:15.811938       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:46:15.912090       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:46:15.912091       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:46:15.912139       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1198f6d7ee136c379639a8be91363f8629cd112ef08874ca70d1e6b367cf027e] <==
	E1206 09:46:07.270047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:46:07.270250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:46:07.269849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:46:07.270607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:46:07.270749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:46:07.270896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:46:07.270927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:46:07.271030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:46:07.271153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:46:07.271154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:46:07.271221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:46:07.271255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:46:07.271309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:46:07.271363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:46:07.271365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:46:07.272890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:46:07.272914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:46:08.100722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:46:08.221579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:46:08.237666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:46:08.284681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:46:08.406249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:46:08.439523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:46:08.447856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:46:10.364167       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:46:10 pause-137950 kubelet[1309]: I1206 09:46:10.981230    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-137950" podStartSLOduration=2.981199016 podStartE2EDuration="2.981199016s" podCreationTimestamp="2025-12-06 09:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:10.965702984 +0000 UTC m=+1.185397204" watchObservedRunningTime="2025-12-06 09:46:10.981199016 +0000 UTC m=+1.200893234"
	Dec 06 09:46:11 pause-137950 kubelet[1309]: I1206 09:46:11.014736    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-137950" podStartSLOduration=1.014712695 podStartE2EDuration="1.014712695s" podCreationTimestamp="2025-12-06 09:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:10.983423186 +0000 UTC m=+1.203117481" watchObservedRunningTime="2025-12-06 09:46:11.014712695 +0000 UTC m=+1.234406913"
	Dec 06 09:46:11 pause-137950 kubelet[1309]: I1206 09:46:11.014880    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-137950" podStartSLOduration=1.014874262 podStartE2EDuration="1.014874262s" podCreationTimestamp="2025-12-06 09:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:11.014353896 +0000 UTC m=+1.234048117" watchObservedRunningTime="2025-12-06 09:46:11.014874262 +0000 UTC m=+1.234568482"
	Dec 06 09:46:11 pause-137950 kubelet[1309]: I1206 09:46:11.033587    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-137950" podStartSLOduration=1.032752541 podStartE2EDuration="1.032752541s" podCreationTimestamp="2025-12-06 09:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:11.032438372 +0000 UTC m=+1.252132575" watchObservedRunningTime="2025-12-06 09:46:11.032752541 +0000 UTC m=+1.252446760"
	Dec 06 09:46:14 pause-137950 kubelet[1309]: I1206 09:46:14.246213    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:46:14 pause-137950 kubelet[1309]: I1206 09:46:14.247038    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220682    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c43f9cb3-fd04-4ec0-a0be-39fe5319751e-cni-cfg\") pod \"kindnet-wlg57\" (UID: \"c43f9cb3-fd04-4ec0-a0be-39fe5319751e\") " pod="kube-system/kindnet-wlg57"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220736    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c43f9cb3-fd04-4ec0-a0be-39fe5319751e-lib-modules\") pod \"kindnet-wlg57\" (UID: \"c43f9cb3-fd04-4ec0-a0be-39fe5319751e\") " pod="kube-system/kindnet-wlg57"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220766    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c43f9cb3-fd04-4ec0-a0be-39fe5319751e-xtables-lock\") pod \"kindnet-wlg57\" (UID: \"c43f9cb3-fd04-4ec0-a0be-39fe5319751e\") " pod="kube-system/kindnet-wlg57"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220795    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e43f165f-f179-4468-96a4-e719b6dd0e33-kube-proxy\") pod \"kube-proxy-nph7w\" (UID: \"e43f165f-f179-4468-96a4-e719b6dd0e33\") " pod="kube-system/kube-proxy-nph7w"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220819    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e43f165f-f179-4468-96a4-e719b6dd0e33-xtables-lock\") pod \"kube-proxy-nph7w\" (UID: \"e43f165f-f179-4468-96a4-e719b6dd0e33\") " pod="kube-system/kube-proxy-nph7w"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220839    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e43f165f-f179-4468-96a4-e719b6dd0e33-lib-modules\") pod \"kube-proxy-nph7w\" (UID: \"e43f165f-f179-4468-96a4-e719b6dd0e33\") " pod="kube-system/kube-proxy-nph7w"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220866    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrch\" (UniqueName: \"kubernetes.io/projected/c43f9cb3-fd04-4ec0-a0be-39fe5319751e-kube-api-access-pwrch\") pod \"kindnet-wlg57\" (UID: \"c43f9cb3-fd04-4ec0-a0be-39fe5319751e\") " pod="kube-system/kindnet-wlg57"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.220890    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfkn9\" (UniqueName: \"kubernetes.io/projected/e43f165f-f179-4468-96a4-e719b6dd0e33-kube-api-access-lfkn9\") pod \"kube-proxy-nph7w\" (UID: \"e43f165f-f179-4468-96a4-e719b6dd0e33\") " pod="kube-system/kube-proxy-nph7w"
	Dec 06 09:46:15 pause-137950 kubelet[1309]: I1206 09:46:15.970938    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nph7w" podStartSLOduration=0.970913609 podStartE2EDuration="970.913609ms" podCreationTimestamp="2025-12-06 09:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:15.961577435 +0000 UTC m=+6.181271652" watchObservedRunningTime="2025-12-06 09:46:15.970913609 +0000 UTC m=+6.190607822"
	Dec 06 09:46:17 pause-137950 kubelet[1309]: I1206 09:46:17.173332    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wlg57" podStartSLOduration=2.17330917 podStartE2EDuration="2.17330917s" podCreationTimestamp="2025-12-06 09:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:15.97114907 +0000 UTC m=+6.190843288" watchObservedRunningTime="2025-12-06 09:46:17.17330917 +0000 UTC m=+7.393003391"
	Dec 06 09:46:56 pause-137950 kubelet[1309]: I1206 09:46:56.188702    1309 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:46:56 pause-137950 kubelet[1309]: I1206 09:46:56.319912    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g95r\" (UniqueName: \"kubernetes.io/projected/ee28799a-68e2-4162-8627-a4218134d6bf-kube-api-access-2g95r\") pod \"coredns-66bc5c9577-p66ll\" (UID: \"ee28799a-68e2-4162-8627-a4218134d6bf\") " pod="kube-system/coredns-66bc5c9577-p66ll"
	Dec 06 09:46:56 pause-137950 kubelet[1309]: I1206 09:46:56.319966    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee28799a-68e2-4162-8627-a4218134d6bf-config-volume\") pod \"coredns-66bc5c9577-p66ll\" (UID: \"ee28799a-68e2-4162-8627-a4218134d6bf\") " pod="kube-system/coredns-66bc5c9577-p66ll"
	Dec 06 09:46:57 pause-137950 kubelet[1309]: I1206 09:46:57.056764    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p66ll" podStartSLOduration=42.056742756 podStartE2EDuration="42.056742756s" podCreationTimestamp="2025-12-06 09:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:46:57.056499473 +0000 UTC m=+47.276193694" watchObservedRunningTime="2025-12-06 09:46:57.056742756 +0000 UTC m=+47.276436976"
	Dec 06 09:47:06 pause-137950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:47:06 pause-137950 kubelet[1309]: I1206 09:47:06.531554    1309 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 06 09:47:06 pause-137950 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:47:06 pause-137950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:47:06 pause-137950 systemd[1]: kubelet.service: Consumed 2.403s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-137950 -n pause-137950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-137950 -n pause-137950: exit status 2 (338.428341ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
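The status probe above reads a single field through a Go template, and the harness tolerates the non-zero exit because minikube status encodes component health in its exit code as well as its output (exact semantics vary by minikube version). A minimal manual sketch of the same check, using the profile name from this run:

    # Query one status field via a Go template, as the harness does:
    out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-137950
    # Prints "Running" here yet exits 2, consistent with the kubelet having
    # been stopped at the end of the log above.
    echo "exit: $?"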
helpers_test.go:269: (dbg) Run:  kubectl --context pause-137950 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.070845ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:50:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
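The MK_ADDON_ENABLE_PAUSED error above comes from the paused-state check shelling out to runc, which cannot succeed on this node: the CRI-O configuration dumped earlier in this report sets default_runtime = "crun" with runtime_root = "/run/crun", so /run/runc never exists. A rough manual reproduction from inside the node (for example via minikube ssh -p old-k8s-version-507108); the crun invocation is an illustrative equivalent under those config assumptions, not what minikube itself runs:

    # The exact command minikube ran, per the stderr above:
    sudo runc list -f json            # fails: open /run/runc: no such file or directory
    # Listing containers under the runtime this cluster actually uses
    # (root path taken from the CRI-O config dump; illustrative only):
    sudo crun --root /run/crun list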
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-507108 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-507108 describe deploy/metrics-server -n kube-system: exit status 1 (67.821782ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-507108 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
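The assertion at start_stop_delete_test.go:219 expects the deployment image to be the --registries value prefixed onto the --images value, i.e. fake.domain/registry.k8s.io/echoserver:1.4; since the enable call itself exited 11, the deployment was never created and the describe output is empty. A hand check of the override, assuming a run where the addon does enable:

    # Inspect the image the addon deployment actually carries (the deployment
    # does not exist in this failed run):
    kubectl --context old-k8s-version-507108 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # Expected by the test: fake.domain/registry.k8s.io/echoserver:1.4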
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-507108
helpers_test.go:243: (dbg) docker inspect old-k8s-version-507108:

-- stdout --
	[
	    {
	        "Id": "e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3",
	        "Created": "2025-12-06T09:49:19.254369634Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742596,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:49:19.293049441Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/hosts",
	        "LogPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3-json.log",
	        "Name": "/old-k8s-version-507108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-507108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-507108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3",
	                "LowerDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-507108",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-507108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-507108",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-507108",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-507108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0b9708d299cd30023e5ac44dcd1230db0727d3fd8d1dc3b6c7f23b1bc7c753d1",
	            "SandboxKey": "/var/run/docker/netns/0b9708d299cd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-507108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "68b5b112ecd8d43eda4b45466a2546c01f5d267b315a697829fb79471d3e3a2b",
	                    "EndpointID": "2cdda1cc6dde426d9173ff758d5ae6800512271e9cb9acb06cfec1c3502a4622",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6e:f7:64:25:a3:48",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-507108",
	                        "e36525fbfc60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
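
The NetworkSettings.Ports map in the inspect output above is where the host-side port mappings for the node live. Rather than scanning the full JSON, the mapped host port for a given container port can be pulled out with docker's --format templating; a sketch, with the container name and the 22/tcp mapping taken from the output above:

$ docker container inspect \
    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
    old-k8s-version-507108
33181
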
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 -n old-k8s-version-507108
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-507108 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-507108 logs -n 25: (1.043664302s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-983381 sudo systemctl cat kubelet --no-pager                                                                                                                                                                                        │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                         │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                        │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo docker system info                                                                                                                                                                                                      │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo containerd config dump                                                                                                                                                                                                  │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo crio config                                                                                                                                                                                                             │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ delete  │ -p cilium-983381                                                                                                                                                                                                                              │ cilium-983381          │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:49 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-507108 │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-507108 │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:49:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:49:13.508934  741534 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:49:13.509195  741534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:49:13.509205  741534 out.go:374] Setting ErrFile to fd 2...
	I1206 09:49:13.509210  741534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:49:13.509429  741534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:49:13.509909  741534 out.go:368] Setting JSON to false
	I1206 09:49:13.511055  741534 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9097,"bootTime":1765005456,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:49:13.511109  741534 start.go:143] virtualization: kvm guest
	I1206 09:49:13.512820  741534 out.go:179] * [old-k8s-version-507108] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:49:13.513859  741534 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:49:13.513894  741534 notify.go:221] Checking for updates...
	I1206 09:49:13.515723  741534 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:49:13.516885  741534 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:49:13.517811  741534 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:49:13.518753  741534 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:49:13.519700  741534 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:49:13.521037  741534 config.go:182] Loaded profile config "cert-expiration-669264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:49:13.521141  741534 config.go:182] Loaded profile config "kubernetes-upgrade-581224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:49:13.521237  741534 config.go:182] Loaded profile config "stopped-upgrade-031481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:49:13.521335  741534 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:49:13.545923  741534 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:49:13.546008  741534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:49:13.602062  741534 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-06 09:49:13.592451559 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:49:13.602170  741534 docker.go:319] overlay module found
	I1206 09:49:13.603856  741534 out.go:179] * Using the docker driver based on user configuration
	I1206 09:49:13.604928  741534 start.go:309] selected driver: docker
	I1206 09:49:13.604940  741534 start.go:927] validating driver "docker" against <nil>
	I1206 09:49:13.604950  741534 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:49:13.605635  741534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:49:13.662376  741534 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-06 09:49:13.652209763 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:49:13.662682  741534 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:49:13.663105  741534 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:49:13.664689  741534 out.go:179] * Using Docker driver with root privileges
	I1206 09:49:13.665585  741534 cni.go:84] Creating CNI manager for ""
	I1206 09:49:13.665647  741534 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:49:13.665657  741534 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:49:13.665719  741534 start.go:353] cluster config:
	{Name:old-k8s-version-507108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-507108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:49:13.666877  741534 out.go:179] * Starting "old-k8s-version-507108" primary control-plane node in "old-k8s-version-507108" cluster
	I1206 09:49:13.667979  741534 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:49:13.668996  741534 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:49:13.669880  741534 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 09:49:13.669910  741534 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:49:13.669932  741534 cache.go:65] Caching tarball of preloaded images
	I1206 09:49:13.669979  741534 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:49:13.670034  741534 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:49:13.670051  741534 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1206 09:49:13.670167  741534 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/config.json ...
	I1206 09:49:13.670195  741534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/config.json: {Name:mk2166a2df38c0e1b3228369766adf2f5d87fee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:13.690971  741534 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:49:13.690990  741534 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:49:13.691019  741534 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:49:13.691055  741534 start.go:360] acquireMachinesLock for old-k8s-version-507108: {Name:mk7f605f22d1124ea10a63de891e32f092af6d13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:49:13.691160  741534 start.go:364] duration metric: took 88.151µs to acquireMachinesLock for "old-k8s-version-507108"
	I1206 09:49:13.691188  741534 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-507108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-507108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:49:13.691276  741534 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:49:12.020196  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:13.846643  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:52498->192.168.76.2:8443: read: connection reset by peer
	I1206 09:49:13.846714  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:13.846763  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:13.886834  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:13.886864  725997 cri.go:89] found id: "709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4"
	I1206 09:49:13.886872  725997 cri.go:89] found id: ""
	I1206 09:49:13.886883  725997 logs.go:282] 2 containers: [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e 709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4]
	I1206 09:49:13.886989  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:13.891765  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:13.895515  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:13.895585  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:13.932385  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:13.932412  725997 cri.go:89] found id: ""
	I1206 09:49:13.932423  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:13.932513  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:13.936327  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:13.936383  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:13.983519  725997 cri.go:89] found id: ""
	I1206 09:49:13.983547  725997 logs.go:282] 0 containers: []
	W1206 09:49:13.983560  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:13.983569  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:13.983632  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:14.022302  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:14.022328  725997 cri.go:89] found id: ""
	I1206 09:49:14.022338  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:14.022405  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:14.026519  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:14.026588  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:14.064733  725997 cri.go:89] found id: ""
	I1206 09:49:14.064766  725997 logs.go:282] 0 containers: []
	W1206 09:49:14.064779  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:14.064788  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:14.064854  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:14.106199  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:14.106234  725997 cri.go:89] found id: "8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71"
	I1206 09:49:14.106240  725997 cri.go:89] found id: ""
	I1206 09:49:14.106257  725997 logs.go:282] 2 containers: [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03 8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71]
	I1206 09:49:14.106320  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:14.112029  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:14.115674  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:14.115727  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:14.157428  725997 cri.go:89] found id: ""
	I1206 09:49:14.157451  725997 logs.go:282] 0 containers: []
	W1206 09:49:14.157472  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:14.157481  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:14.157540  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:14.195581  725997 cri.go:89] found id: ""
	I1206 09:49:14.195614  725997 logs.go:282] 0 containers: []
	W1206 09:49:14.195627  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:14.195639  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:14.195656  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:14.266485  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:14.266524  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:14.308048  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:14.308078  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:14.391553  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:14.391590  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:14.461071  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:14.461093  725997 logs.go:123] Gathering logs for kube-apiserver [709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4] ...
	I1206 09:49:14.461111  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4"
	W1206 09:49:14.496994  725997 logs.go:130] failed kube-apiserver [709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:49:14.494066    1631 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4\": container with ID starting with 709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4 not found: ID does not exist" containerID="709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4"
	time="2025-12-06T09:49:14Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4\": container with ID starting with 709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1206 09:49:14.494066    1631 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4\": container with ID starting with 709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4 not found: ID does not exist" containerID="709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4"
	time="2025-12-06T09:49:14Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4\": container with ID starting with 709eb62dd395333998f79d029188d79bf9be18c43e09e1814f4cd9be71da40e4 not found: ID does not exist"
	
	** /stderr **
	I1206 09:49:14.497013  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:14.497026  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:14.538813  725997 logs.go:123] Gathering logs for kube-controller-manager [8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71] ...
	I1206 09:49:14.538849  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71"
	I1206 09:49:14.575145  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:14.575178  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:14.615591  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:14.615640  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:14.661167  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:14.661199  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:14.681319  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:14.681350  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:13.692921  741534 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:49:13.693127  741534 start.go:159] libmachine.API.Create for "old-k8s-version-507108" (driver="docker")
	I1206 09:49:13.693158  741534 client.go:173] LocalClient.Create starting
	I1206 09:49:13.693221  741534 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:49:13.693252  741534 main.go:143] libmachine: Decoding PEM data...
	I1206 09:49:13.693272  741534 main.go:143] libmachine: Parsing certificate...
	I1206 09:49:13.693343  741534 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:49:13.693365  741534 main.go:143] libmachine: Decoding PEM data...
	I1206 09:49:13.693375  741534 main.go:143] libmachine: Parsing certificate...
	I1206 09:49:13.693745  741534 cli_runner.go:164] Run: docker network inspect old-k8s-version-507108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:49:13.710340  741534 cli_runner.go:211] docker network inspect old-k8s-version-507108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:49:13.710413  741534 network_create.go:284] running [docker network inspect old-k8s-version-507108] to gather additional debugging logs...
	I1206 09:49:13.710429  741534 cli_runner.go:164] Run: docker network inspect old-k8s-version-507108
	W1206 09:49:13.726782  741534 cli_runner.go:211] docker network inspect old-k8s-version-507108 returned with exit code 1
	I1206 09:49:13.726820  741534 network_create.go:287] error running [docker network inspect old-k8s-version-507108]: docker network inspect old-k8s-version-507108: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-507108 not found
	I1206 09:49:13.726833  741534 network_create.go:289] output of [docker network inspect old-k8s-version-507108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-507108 not found
	
	** /stderr **
	I1206 09:49:13.726988  741534 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:49:13.744873  741534 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:49:13.745587  741534 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:49:13.746318  741534 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:49:13.746847  741534 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ff25d0f3f317 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:b4:c0:5d:75:0d} reservation:<nil>}
	I1206 09:49:13.747704  741534 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df3ea0}
	I1206 09:49:13.747735  741534 network_create.go:124] attempt to create docker network old-k8s-version-507108 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:49:13.747786  741534 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-507108 old-k8s-version-507108
	I1206 09:49:13.799064  741534 network_create.go:108] docker network old-k8s-version-507108 192.168.85.0/24 created
	I1206 09:49:13.799100  741534 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-507108" container
	I1206 09:49:13.799161  741534 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:49:13.816923  741534 cli_runner.go:164] Run: docker volume create old-k8s-version-507108 --label name.minikube.sigs.k8s.io=old-k8s-version-507108 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:49:13.834055  741534 oci.go:103] Successfully created a docker volume old-k8s-version-507108
	I1206 09:49:13.834165  741534 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-507108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-507108 --entrypoint /usr/bin/test -v old-k8s-version-507108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:49:14.233819  741534 oci.go:107] Successfully prepared a docker volume old-k8s-version-507108
	I1206 09:49:14.233906  741534 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 09:49:14.233922  741534 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:49:14.234006  741534 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-507108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:49:17.220757  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:17.221211  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:49:17.221274  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:17.221338  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:17.257241  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:17.257263  725997 cri.go:89] found id: ""
	I1206 09:49:17.257286  725997 logs.go:282] 1 containers: [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e]
	I1206 09:49:17.257338  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:17.261153  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:17.261227  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:17.296916  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:17.296937  725997 cri.go:89] found id: ""
	I1206 09:49:17.296945  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:17.297007  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:17.300929  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:17.300984  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:17.335418  725997 cri.go:89] found id: ""
	I1206 09:49:17.335442  725997 logs.go:282] 0 containers: []
	W1206 09:49:17.335452  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:17.335489  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:17.335541  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:17.368661  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:17.368687  725997 cri.go:89] found id: ""
	I1206 09:49:17.368698  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:17.368759  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:17.372431  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:17.372540  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:17.406678  725997 cri.go:89] found id: ""
	I1206 09:49:17.406711  725997 logs.go:282] 0 containers: []
	W1206 09:49:17.406724  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:17.406745  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:17.406808  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:17.440556  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:17.440586  725997 cri.go:89] found id: "8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71"
	I1206 09:49:17.440593  725997 cri.go:89] found id: ""
	I1206 09:49:17.440605  725997 logs.go:282] 2 containers: [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03 8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71]
	I1206 09:49:17.440669  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:17.444298  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:17.447882  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:17.447949  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:17.481335  725997 cri.go:89] found id: ""
	I1206 09:49:17.481366  725997 logs.go:282] 0 containers: []
	W1206 09:49:17.481381  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:17.481389  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:17.481448  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:17.515824  725997 cri.go:89] found id: ""
	I1206 09:49:17.515851  725997 logs.go:282] 0 containers: []
	W1206 09:49:17.515859  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:17.515874  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:17.515890  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:17.536017  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:17.536049  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:17.597299  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:17.597323  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:17.597336  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:17.634692  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:17.634722  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:17.669408  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:17.669445  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:17.731879  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:17.731917  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:17.766599  725997 logs.go:123] Gathering logs for kube-controller-manager [8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71] ...
	I1206 09:49:17.766633  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71"
	I1206 09:49:17.802764  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:17.802793  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:17.836705  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:17.836737  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:17.908409  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:17.908443  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:20.448624  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:20.449145  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:49:20.449223  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:20.449282  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:20.484717  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:20.484740  725997 cri.go:89] found id: ""
	I1206 09:49:20.484749  725997 logs.go:282] 1 containers: [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e]
	I1206 09:49:20.484811  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:20.488791  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:20.488877  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:20.523929  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:20.523950  725997 cri.go:89] found id: ""
	I1206 09:49:20.523959  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:20.524007  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:20.528121  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:20.528199  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:20.562539  725997 cri.go:89] found id: ""
	I1206 09:49:20.562567  725997 logs.go:282] 0 containers: []
	W1206 09:49:20.562578  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:20.562586  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:20.562655  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:20.597910  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:20.597937  725997 cri.go:89] found id: ""
	I1206 09:49:20.597948  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:20.598002  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:20.601981  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:20.602063  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:20.637013  725997 cri.go:89] found id: ""
	I1206 09:49:20.637045  725997 logs.go:282] 0 containers: []
	W1206 09:49:20.637053  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:20.637060  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:20.637119  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:18.109095  714616 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062444953s)
	W1206 09:49:18.109142  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1206 09:49:18.109152  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:18.109167  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:18.141719  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:18.141750  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:18.168959  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:18.168989  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:18.187423  714616 logs.go:123] Gathering logs for kube-apiserver [f0d7825ad57cf08165cfa2196dd1b71023d753a7ad6d51fc15c8c198202d8e71] ...
	I1206 09:49:18.187464  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f0d7825ad57cf08165cfa2196dd1b71023d753a7ad6d51fc15c8c198202d8e71"
	I1206 09:49:18.217664  714616 logs.go:123] Gathering logs for kube-controller-manager [10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc] ...
	I1206 09:49:18.217694  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
	I1206 09:49:18.245637  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:18.245666  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:18.300037  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:18.300076  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:18.331298  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:18.331326  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:20.904522  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:21.270369  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:40616->192.168.103.2:8443: read: connection reset by peer
	I1206 09:49:21.270487  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:21.270591  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:21.299569  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:21.299598  714616 cri.go:89] found id: "f0d7825ad57cf08165cfa2196dd1b71023d753a7ad6d51fc15c8c198202d8e71"
	I1206 09:49:21.299605  714616 cri.go:89] found id: ""
	I1206 09:49:21.299615  714616 logs.go:282] 2 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a f0d7825ad57cf08165cfa2196dd1b71023d753a7ad6d51fc15c8c198202d8e71]
	I1206 09:49:21.299670  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:21.303802  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:21.307679  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:21.307742  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:21.335138  714616 cri.go:89] found id: ""
	I1206 09:49:21.335172  714616 logs.go:282] 0 containers: []
	W1206 09:49:21.335181  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:21.335188  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:21.335245  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:21.364826  714616 cri.go:89] found id: ""
	I1206 09:49:21.364851  714616 logs.go:282] 0 containers: []
	W1206 09:49:21.364859  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:21.364866  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:21.364924  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:21.392788  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:21.392816  714616 cri.go:89] found id: ""
	I1206 09:49:21.392827  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:21.392880  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:21.396921  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:21.396981  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:21.425255  714616 cri.go:89] found id: ""
	I1206 09:49:21.425280  714616 logs.go:282] 0 containers: []
	W1206 09:49:21.425288  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:21.425294  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:21.425348  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:21.454552  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:21.454574  714616 cri.go:89] found id: "10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
	I1206 09:49:21.454579  714616 cri.go:89] found id: ""
	I1206 09:49:21.454588  714616 logs.go:282] 2 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc]
	I1206 09:49:21.454639  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:21.458731  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:21.462351  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:21.462418  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:21.490561  714616 cri.go:89] found id: ""
	I1206 09:49:21.490592  714616 logs.go:282] 0 containers: []
	W1206 09:49:21.490603  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:21.490612  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:21.490673  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:21.519527  714616 cri.go:89] found id: ""
	I1206 09:49:21.519562  714616 logs.go:282] 0 containers: []
	W1206 09:49:21.519574  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:21.519596  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:21.519614  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:21.546926  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:21.546952  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:21.621346  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:21.621386  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:21.654339  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:21.654368  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:21.682100  714616 logs.go:123] Gathering logs for kube-controller-manager [10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc] ...
	I1206 09:49:21.682129  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
	I1206 09:49:21.710230  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:21.710267  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:19.182332  741534 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-507108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.948260141s)
	I1206 09:49:19.182366  741534 kic.go:203] duration metric: took 4.94844013s to extract preloaded images to volume ...
	W1206 09:49:19.182500  741534 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:49:19.182539  741534 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:49:19.182580  741534 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:49:19.238799  741534 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-507108 --name old-k8s-version-507108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-507108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-507108 --network old-k8s-version-507108 --ip 192.168.85.2 --volume old-k8s-version-507108:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:49:19.512808  741534 cli_runner.go:164] Run: docker container inspect old-k8s-version-507108 --format={{.State.Running}}
	I1206 09:49:19.530922  741534 cli_runner.go:164] Run: docker container inspect old-k8s-version-507108 --format={{.State.Status}}
	I1206 09:49:19.548725  741534 cli_runner.go:164] Run: docker exec old-k8s-version-507108 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:49:19.597869  741534 oci.go:144] the created container "old-k8s-version-507108" has a running status.
	I1206 09:49:19.597898  741534 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa...
	I1206 09:49:19.618422  741534 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:49:19.644046  741534 cli_runner.go:164] Run: docker container inspect old-k8s-version-507108 --format={{.State.Status}}
	I1206 09:49:19.665262  741534 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:49:19.665285  741534 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-507108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:49:19.718829  741534 cli_runner.go:164] Run: docker container inspect old-k8s-version-507108 --format={{.State.Status}}
	I1206 09:49:19.739802  741534 machine.go:94] provisionDockerMachine start ...
	I1206 09:49:19.739914  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:19.761813  741534 main.go:143] libmachine: Using SSH client type: native
	I1206 09:49:19.762066  741534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1206 09:49:19.762080  741534 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:49:19.762711  741534 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35848->127.0.0.1:33181: read: connection reset by peer
	I1206 09:49:22.892787  741534 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-507108
	
	I1206 09:49:22.892832  741534 ubuntu.go:182] provisioning hostname "old-k8s-version-507108"
	I1206 09:49:22.892912  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:22.911285  741534 main.go:143] libmachine: Using SSH client type: native
	I1206 09:49:22.911661  741534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1206 09:49:22.911687  741534 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-507108 && echo "old-k8s-version-507108" | sudo tee /etc/hostname
	I1206 09:49:23.053843  741534 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-507108
	
	I1206 09:49:23.053942  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:23.073150  741534 main.go:143] libmachine: Using SSH client type: native
	I1206 09:49:23.073452  741534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1206 09:49:23.073498  741534 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-507108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-507108/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-507108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:49:23.202429  741534 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:49:23.202518  741534 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:49:23.202560  741534 ubuntu.go:190] setting up certificates
	I1206 09:49:23.202575  741534 provision.go:84] configureAuth start
	I1206 09:49:23.202667  741534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-507108
	I1206 09:49:23.220585  741534 provision.go:143] copyHostCerts
	I1206 09:49:23.220646  741534 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:49:23.220663  741534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:49:23.220743  741534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:49:23.220851  741534 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:49:23.220865  741534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:49:23.220918  741534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:49:23.220994  741534 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:49:23.221005  741534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:49:23.221044  741534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:49:23.221115  741534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-507108 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-507108]
	I1206 09:49:23.486389  741534 provision.go:177] copyRemoteCerts
	I1206 09:49:23.486490  741534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:49:23.486541  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:23.504322  741534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa Username:docker}
	I1206 09:49:23.599129  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:49:23.618803  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:49:23.636262  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 09:49:23.653929  741534 provision.go:87] duration metric: took 451.335183ms to configureAuth
	I1206 09:49:23.653955  741534 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:49:23.654101  741534 config.go:182] Loaded profile config "old-k8s-version-507108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:49:23.654199  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:23.671855  741534 main.go:143] libmachine: Using SSH client type: native
	I1206 09:49:23.672066  741534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1206 09:49:23.672083  741534 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:49:23.952782  741534 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:49:23.952810  741534 machine.go:97] duration metric: took 4.212983173s to provisionDockerMachine
	I1206 09:49:23.952824  741534 client.go:176] duration metric: took 10.259657568s to LocalClient.Create
	I1206 09:49:23.952849  741534 start.go:167] duration metric: took 10.259721124s to libmachine.API.Create "old-k8s-version-507108"
	I1206 09:49:23.952872  741534 start.go:293] postStartSetup for "old-k8s-version-507108" (driver="docker")
	I1206 09:49:23.952886  741534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:49:23.952980  741534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:49:23.953038  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:23.973180  741534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa Username:docker}
	I1206 09:49:24.070597  741534 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:49:24.074314  741534 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:49:24.074342  741534 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:49:24.074354  741534 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:49:24.074412  741534 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:49:24.074536  741534 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:49:24.074651  741534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:49:24.082711  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:49:24.104854  741534 start.go:296] duration metric: took 151.96408ms for postStartSetup
	I1206 09:49:24.105196  741534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-507108
	I1206 09:49:24.125958  741534 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/config.json ...
	I1206 09:49:24.126237  741534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:49:24.126294  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:24.143792  741534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa Username:docker}
	I1206 09:49:24.238803  741534 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:49:24.243647  741534 start.go:128] duration metric: took 10.552356102s to createHost
	I1206 09:49:24.243673  741534 start.go:83] releasing machines lock for "old-k8s-version-507108", held for 10.552499754s
	I1206 09:49:24.243739  741534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-507108
	I1206 09:49:24.262978  741534 ssh_runner.go:195] Run: cat /version.json
	I1206 09:49:24.263030  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:24.263046  741534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:49:24.263111  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:24.283486  741534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa Username:docker}
	I1206 09:49:24.283698  741534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa Username:docker}
	I1206 09:49:24.380349  741534 ssh_runner.go:195] Run: systemctl --version
	I1206 09:49:24.443083  741534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:49:24.486680  741534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:49:24.491598  741534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:49:24.491656  741534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:49:24.518345  741534 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:49:24.518378  741534 start.go:496] detecting cgroup driver to use...
	I1206 09:49:24.518418  741534 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:49:24.518496  741534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:49:24.535860  741534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:49:24.548824  741534 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:49:24.548872  741534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:49:24.565925  741534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:49:24.584853  741534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:49:24.676656  741534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:49:24.773093  741534 docker.go:234] disabling docker service ...
	I1206 09:49:24.773163  741534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:49:24.795738  741534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:49:24.808944  741534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:49:24.907895  741534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:49:25.000147  741534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:49:25.012793  741534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:49:25.027024  741534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 09:49:25.027094  741534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:49:25.037499  741534 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:49:25.037567  741534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:49:25.048137  741534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:49:25.057186  741534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:49:25.066003  741534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:49:25.074185  741534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:49:25.082887  741534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:49:25.096555  741534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:49:25.105321  741534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:49:25.112955  741534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:49:25.120799  741534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:49:25.199991  741534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:49:25.341905  741534 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:49:25.341994  741534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:49:25.346090  741534 start.go:564] Will wait 60s for crictl version
	I1206 09:49:25.346148  741534 ssh_runner.go:195] Run: which crictl
	I1206 09:49:25.349588  741534 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:49:25.375938  741534 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:49:25.376028  741534 ssh_runner.go:195] Run: crio --version
	I1206 09:49:25.404105  741534 ssh_runner.go:195] Run: crio --version
	I1206 09:49:25.433847  741534 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1206 09:49:20.672297  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:20.672321  725997 cri.go:89] found id: "8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71"
	I1206 09:49:20.672325  725997 cri.go:89] found id: ""
	I1206 09:49:20.672334  725997 logs.go:282] 2 containers: [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03 8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71]
	I1206 09:49:20.672385  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:20.676357  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:20.680095  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:20.680164  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:20.714000  725997 cri.go:89] found id: ""
	I1206 09:49:20.714028  725997 logs.go:282] 0 containers: []
	W1206 09:49:20.714039  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:20.714048  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:20.714122  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:20.750437  725997 cri.go:89] found id: ""
	I1206 09:49:20.750490  725997 logs.go:282] 0 containers: []
	W1206 09:49:20.750503  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:20.750535  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:20.750563  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:20.785605  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:20.785640  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:20.820399  725997 logs.go:123] Gathering logs for kube-controller-manager [8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71] ...
	I1206 09:49:20.820428  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71"
	I1206 09:49:20.856987  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:20.857014  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:20.877057  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:20.877093  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:20.937149  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:20.937178  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:20.937213  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:20.976000  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:20.976032  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:21.037331  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:21.037375  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:21.071407  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:21.071444  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:21.110336  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:21.110367  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:23.683501  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:23.683873  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:49:23.683926  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:23.683976  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:23.719913  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:23.719940  725997 cri.go:89] found id: ""
	I1206 09:49:23.719951  725997 logs.go:282] 1 containers: [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e]
	I1206 09:49:23.720019  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:23.724143  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:23.724219  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:23.759648  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:23.759683  725997 cri.go:89] found id: ""
	I1206 09:49:23.759695  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:23.759794  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:23.763569  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:23.763638  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:23.799368  725997 cri.go:89] found id: ""
	I1206 09:49:23.799392  725997 logs.go:282] 0 containers: []
	W1206 09:49:23.799399  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:23.799405  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:23.799476  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:23.834398  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:23.834419  725997 cri.go:89] found id: ""
	I1206 09:49:23.834429  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:23.834511  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:23.838630  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:23.838706  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:23.875198  725997 cri.go:89] found id: ""
	I1206 09:49:23.875233  725997 logs.go:282] 0 containers: []
	W1206 09:49:23.875242  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:23.875248  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:23.875311  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:23.914152  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:23.914180  725997 cri.go:89] found id: "8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71"
	I1206 09:49:23.914188  725997 cri.go:89] found id: ""
	I1206 09:49:23.914199  725997 logs.go:282] 2 containers: [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03 8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71]
	I1206 09:49:23.914265  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:23.918499  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:23.922152  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:23.922219  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:23.958936  725997 cri.go:89] found id: ""
	I1206 09:49:23.958960  725997 logs.go:282] 0 containers: []
	W1206 09:49:23.958967  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:23.958973  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:23.959029  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:23.994350  725997 cri.go:89] found id: ""
	I1206 09:49:23.994377  725997 logs.go:282] 0 containers: []
	W1206 09:49:23.994387  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:23.994408  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:23.994425  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:24.056910  725997 logs.go:123] Gathering logs for kube-controller-manager [8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71] ...
	I1206 09:49:24.056951  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8870e4b1f165cf29ef68893c27619c048dbcdbb3ce38a3ead99209410e047e71"
	I1206 09:49:24.093976  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:24.094017  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:24.134278  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:24.134308  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:24.212583  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:24.212616  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:24.233264  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:24.233294  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:24.300646  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:24.300675  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:24.300691  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:24.339756  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:24.339790  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:24.374179  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:24.374206  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:24.417447  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:24.417492  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:21.763681  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:21.763718  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:21.796578  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:21.796607  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:21.815820  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:21.815859  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:21.873879  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:21.873902  714616 logs.go:123] Gathering logs for kube-apiserver [f0d7825ad57cf08165cfa2196dd1b71023d753a7ad6d51fc15c8c198202d8e71] ...
	I1206 09:49:21.873916  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f0d7825ad57cf08165cfa2196dd1b71023d753a7ad6d51fc15c8c198202d8e71"
	I1206 09:49:24.411132  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:24.411675  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:24.411743  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:24.411810  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:24.443449  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:24.443492  714616 cri.go:89] found id: ""
	I1206 09:49:24.443503  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:24.443564  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:24.447824  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:24.447885  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:24.479735  714616 cri.go:89] found id: ""
	I1206 09:49:24.479762  714616 logs.go:282] 0 containers: []
	W1206 09:49:24.479773  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:24.479781  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:24.479844  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:24.510503  714616 cri.go:89] found id: ""
	I1206 09:49:24.510528  714616 logs.go:282] 0 containers: []
	W1206 09:49:24.510539  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:24.510547  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:24.510613  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:24.539914  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:24.539937  714616 cri.go:89] found id: ""
	I1206 09:49:24.539948  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:24.540008  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:24.543860  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:24.543918  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:24.572782  714616 cri.go:89] found id: ""
	I1206 09:49:24.572811  714616 logs.go:282] 0 containers: []
	W1206 09:49:24.572819  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:24.572827  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:24.572883  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:24.600630  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:24.600652  714616 cri.go:89] found id: "10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
	I1206 09:49:24.600658  714616 cri.go:89] found id: ""
	I1206 09:49:24.600668  714616 logs.go:282] 2 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc]
	I1206 09:49:24.600729  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:24.605059  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:24.608802  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:24.608865  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:24.642383  714616 cri.go:89] found id: ""
	I1206 09:49:24.642414  714616 logs.go:282] 0 containers: []
	W1206 09:49:24.642427  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:24.642436  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:24.642520  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:24.669329  714616 cri.go:89] found id: ""
	I1206 09:49:24.669353  714616 logs.go:282] 0 containers: []
	W1206 09:49:24.669363  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:24.669383  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:24.669398  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:24.697115  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:24.697145  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:24.731363  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:24.731396  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:24.762392  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:24.762431  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:24.856292  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:24.856345  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:24.876678  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:24.876719  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:24.933710  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:24.933734  714616 logs.go:123] Gathering logs for kube-controller-manager [10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc] ...
	I1206 09:49:24.933749  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
	I1206 09:49:24.967021  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:24.967049  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:25.021645  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:25.021675  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
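The log-gathering steps above repeatedly pair `crictl ps` with `crictl logs`. A condensed sketch of that idiom, using the same commands the harness runs over SSH (the `head`/guard wrapper is added here for the empty-id case):

    # find the kube-apiserver container, if any, then tail its last 400 log lines
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
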
	I1206 09:49:25.434932  741534 cli_runner.go:164] Run: docker network inspect old-k8s-version-507108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:49:25.451771  741534 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:49:25.455991  741534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:49:25.466252  741534 kubeadm.go:884] updating cluster {Name:old-k8s-version-507108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-507108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:49:25.466373  741534 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 09:49:25.466416  741534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:49:25.497753  741534 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:49:25.497773  741534 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:49:25.497817  741534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:49:25.523914  741534 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:49:25.523936  741534 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:49:25.523945  741534 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1206 09:49:25.524043  741534 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-507108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-507108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
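The kubelet unit above is installed as a systemd drop-in; the surrounding log shows it being written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and activated. The same steps in condensed form (a sketch of what the harness runs, not an additional test step):

    # stage the drop-in directory, then reload systemd and start kubelet
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
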
	I1206 09:49:25.524116  741534 ssh_runner.go:195] Run: crio config
	I1206 09:49:25.570691  741534 cni.go:84] Creating CNI manager for ""
	I1206 09:49:25.570712  741534 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:49:25.570730  741534 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:49:25.570753  741534 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-507108 NodeName:old-k8s-version-507108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:49:25.570871  741534 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-507108"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
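The generated config above is staged as /var/tmp/minikube/kubeadm.yaml and consumed by `kubeadm init --config`, as the later log lines show. A condensed sketch of that invocation (the real run passes a much longer --ignore-preflight-errors list; only SystemVerification is shown here):

    # promote the staged config and bootstrap the control plane
    # with the pinned kubeadm binary
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification
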
	I1206 09:49:25.570935  741534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1206 09:49:25.579289  741534 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:49:25.579343  741534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:49:25.587062  741534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1206 09:49:25.600319  741534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:49:25.615660  741534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1206 09:49:25.628537  741534 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:49:25.632779  741534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
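The hosts-file update above is a grep-and-rewrite idiom: strip any stale entry, append the fresh one, then copy the temp file back over /etc/hosts as root. The same one-liner, reflowed for readability (a `$'...'` escape stands in for the literal tab):

    # rebuild /etc/hosts with a fresh control-plane.minikube.internal entry
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.85.2\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
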
	I1206 09:49:25.643050  741534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:49:25.723179  741534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:49:25.748999  741534 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108 for IP: 192.168.85.2
	I1206 09:49:25.749031  741534 certs.go:195] generating shared ca certs ...
	I1206 09:49:25.749060  741534 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:25.749242  741534 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:49:25.749294  741534 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:49:25.749304  741534 certs.go:257] generating profile certs ...
	I1206 09:49:25.749361  741534 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.key
	I1206 09:49:25.749376  741534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt with IP's: []
	I1206 09:49:25.800850  741534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt ...
	I1206 09:49:25.800878  741534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: {Name:mk3f5a79c46afb7ea98598641e36731940dbc738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:25.801075  741534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.key ...
	I1206 09:49:25.801093  741534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.key: {Name:mk67a259b5d08610c1ac66babd8d1aea9cffe89d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:25.801207  741534 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.key.8d13a6be
	I1206 09:49:25.801227  741534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.crt.8d13a6be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:49:25.866528  741534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.crt.8d13a6be ...
	I1206 09:49:25.866555  741534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.crt.8d13a6be: {Name:mk0ea38e8ad3060cecc5bdf2b4ca0caad6912b2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:25.866750  741534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.key.8d13a6be ...
	I1206 09:49:25.866772  741534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.key.8d13a6be: {Name:mk3b82dead1d47af27373584115eace906e982cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:25.866860  741534 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.crt.8d13a6be -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.crt
	I1206 09:49:25.866955  741534 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.key.8d13a6be -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.key
	I1206 09:49:25.867021  741534 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/proxy-client.key
	I1206 09:49:25.867037  741534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/proxy-client.crt with IP's: []
	I1206 09:49:25.983606  741534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/proxy-client.crt ...
	I1206 09:49:25.983635  741534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/proxy-client.crt: {Name:mkc7ebeb7a8b36d35b326cfc86214802e1e019ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:25.983801  741534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/proxy-client.key ...
	I1206 09:49:25.983813  741534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/proxy-client.key: {Name:mk3a1c774e063e2dced8053959f9682aefe2c977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:25.983977  741534 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:49:25.984018  741534 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:49:25.984029  741534 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:49:25.984059  741534 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:49:25.984090  741534 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:49:25.984113  741534 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:49:25.984152  741534 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:49:25.984816  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:49:26.004111  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:49:26.021819  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:49:26.039482  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:49:26.057448  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1206 09:49:26.075384  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:49:26.092803  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:49:26.109913  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:49:26.127210  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:49:26.146134  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:49:26.163383  741534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:49:26.180497  741534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:49:26.192817  741534 ssh_runner.go:195] Run: openssl version
	I1206 09:49:26.199075  741534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:49:26.206408  741534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:49:26.213695  741534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:49:26.217278  741534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:49:26.217317  741534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:49:26.253029  741534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:49:26.261410  741534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:49:26.268797  741534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:49:26.276582  741534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:49:26.284733  741534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:49:26.288772  741534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:49:26.288823  741534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:49:26.326554  741534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:49:26.334799  741534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:49:26.342152  741534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:49:26.349314  741534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:49:26.356441  741534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:49:26.359923  741534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:49:26.359970  741534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:49:26.394518  741534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:49:26.402403  741534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
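The openssl/ln sequence above implements the standard OpenSSL subject-hash layout: `openssl x509 -hash` prints the name a TLS library will look up under /etc/ssl/certs, and the symlink makes the certificate resolvable by that name. In sketch form, using the same files the log touches:

    # link the cert under its OpenSSL subject hash so it is found at runtime
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem)
    sudo ln -fs /etc/ssl/certs/502867.pem "/etc/ssl/certs/${hash}.0"
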
	I1206 09:49:26.410232  741534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:49:26.414159  741534 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:49:26.414216  741534 kubeadm.go:401] StartCluster: {Name:old-k8s-version-507108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-507108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:49:26.414282  741534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:49:26.414325  741534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:49:26.442805  741534 cri.go:89] found id: ""
	I1206 09:49:26.442873  741534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:49:26.451169  741534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:49:26.459176  741534 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:49:26.459219  741534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:49:26.467475  741534 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:49:26.467492  741534 kubeadm.go:158] found existing configuration files:
	
	I1206 09:49:26.467537  741534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:49:26.475175  741534 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:49:26.475228  741534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:49:26.482516  741534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:49:26.490656  741534 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:49:26.490699  741534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:49:26.497798  741534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:49:26.505156  741534 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:49:26.505211  741534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:49:26.512358  741534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:49:26.520231  741534 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:49:26.520276  741534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:49:26.527592  741534 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:49:26.572611  741534 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1206 09:49:26.572667  741534 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:49:26.610787  741534 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:49:26.610871  741534 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:49:26.610912  741534 kubeadm.go:319] OS: Linux
	I1206 09:49:26.610964  741534 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:49:26.611027  741534 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:49:26.611082  741534 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:49:26.611165  741534 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:49:26.611254  741534 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:49:26.611333  741534 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:49:26.611401  741534 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:49:26.611469  741534 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:49:26.681783  741534 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:49:26.681935  741534 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:49:26.682041  741534 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 09:49:26.824741  741534 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:49:26.826862  741534 out.go:252]   - Generating certificates and keys ...
	I1206 09:49:26.826945  741534 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:49:26.827031  741534 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:49:27.204954  741534 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:49:27.552910  741534 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:49:27.939440  741534 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:49:28.021382  741534 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:49:28.308302  741534 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:49:28.308467  741534 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-507108] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:49:28.624115  741534 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:49:28.624300  741534 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-507108] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:49:29.057382  741534 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:49:29.167816  741534 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:49:29.432091  741534 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:49:29.432286  741534 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:49:29.661335  741534 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:49:29.947400  741534 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:49:30.025274  741534 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:49:30.155872  741534 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:49:30.156345  741534 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:49:30.161119  741534 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:49:26.958551  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:26.959066  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:49:26.959133  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:26.959203  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:26.994330  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:26.994353  725997 cri.go:89] found id: ""
	I1206 09:49:26.994363  725997 logs.go:282] 1 containers: [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e]
	I1206 09:49:26.994428  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:26.998264  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:26.998328  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:27.032616  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:27.032637  725997 cri.go:89] found id: ""
	I1206 09:49:27.032648  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:27.032706  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:27.036295  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:27.036352  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:27.070914  725997 cri.go:89] found id: ""
	I1206 09:49:27.070942  725997 logs.go:282] 0 containers: []
	W1206 09:49:27.070954  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:27.070963  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:27.071017  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:27.105051  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:27.105076  725997 cri.go:89] found id: ""
	I1206 09:49:27.105089  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:27.105155  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:27.108959  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:27.109019  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:27.143668  725997 cri.go:89] found id: ""
	I1206 09:49:27.143696  725997 logs.go:282] 0 containers: []
	W1206 09:49:27.143707  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:27.143714  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:27.143776  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:27.185411  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:27.185435  725997 cri.go:89] found id: ""
	I1206 09:49:27.185446  725997 logs.go:282] 1 containers: [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03]
	I1206 09:49:27.185526  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:27.189269  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:27.189328  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:27.224067  725997 cri.go:89] found id: ""
	I1206 09:49:27.224094  725997 logs.go:282] 0 containers: []
	W1206 09:49:27.224105  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:27.224113  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:27.224170  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:27.258673  725997 cri.go:89] found id: ""
	I1206 09:49:27.258699  725997 logs.go:282] 0 containers: []
	W1206 09:49:27.258707  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:27.258725  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:27.258740  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:27.294290  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:27.294320  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:27.387522  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:27.387558  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:27.408747  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:27.408776  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:27.446365  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:27.446394  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:27.516870  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:27.516904  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:27.551659  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:27.551689  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:27.595718  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:27.595747  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:27.662567  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:27.662589  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:27.662605  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:30.202000  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:30.202476  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:49:30.202561  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:30.202624  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:30.242279  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:30.242302  725997 cri.go:89] found id: ""
	I1206 09:49:30.242312  725997 logs.go:282] 1 containers: [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e]
	I1206 09:49:30.242376  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:30.246333  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:30.246399  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:30.281913  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:30.281936  725997 cri.go:89] found id: ""
	I1206 09:49:30.281944  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:30.281993  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:30.285872  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:30.285945  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:30.325511  725997 cri.go:89] found id: ""
	I1206 09:49:30.325536  725997 logs.go:282] 0 containers: []
	W1206 09:49:30.325544  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:30.325551  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:30.325601  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:30.362568  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:30.362590  725997 cri.go:89] found id: ""
	I1206 09:49:30.362600  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:30.362661  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:30.366617  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:30.366698  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:30.403158  725997 cri.go:89] found id: ""
	I1206 09:49:30.403189  725997 logs.go:282] 0 containers: []
	W1206 09:49:30.403201  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:30.403209  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:30.403269  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:30.439248  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:30.439274  725997 cri.go:89] found id: ""
	I1206 09:49:30.439285  725997 logs.go:282] 1 containers: [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03]
	I1206 09:49:30.439340  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:30.443253  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:30.443321  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:30.479883  725997 cri.go:89] found id: ""
	I1206 09:49:30.479914  725997 logs.go:282] 0 containers: []
	W1206 09:49:30.479926  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:30.479934  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:30.479998  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:30.518414  725997 cri.go:89] found id: ""
	I1206 09:49:30.518444  725997 logs.go:282] 0 containers: []
	W1206 09:49:30.518470  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:30.518494  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:30.518512  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:30.614510  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:30.614553  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:30.635777  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:30.635810  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:27.557651  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:27.558074  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:27.558139  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:27.558194  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:27.586934  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:27.586957  714616 cri.go:89] found id: ""
	I1206 09:49:27.586968  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:27.587029  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:27.592704  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:27.592775  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:27.622289  714616 cri.go:89] found id: ""
	I1206 09:49:27.622314  714616 logs.go:282] 0 containers: []
	W1206 09:49:27.622324  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:27.622332  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:27.622391  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:27.653567  714616 cri.go:89] found id: ""
	I1206 09:49:27.653599  714616 logs.go:282] 0 containers: []
	W1206 09:49:27.653609  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:27.653618  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:27.653692  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:27.686238  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:27.686263  714616 cri.go:89] found id: ""
	I1206 09:49:27.686272  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:27.686321  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:27.690929  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:27.690996  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:27.723090  714616 cri.go:89] found id: ""
	I1206 09:49:27.723114  714616 logs.go:282] 0 containers: []
	W1206 09:49:27.723122  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:27.723128  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:27.723175  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:27.752603  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:27.752631  714616 cri.go:89] found id: "10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
	I1206 09:49:27.752636  714616 cri.go:89] found id: ""
	I1206 09:49:27.752646  714616 logs.go:282] 2 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc]
	I1206 09:49:27.752696  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:27.756789  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:27.760430  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:27.760517  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:27.790437  714616 cri.go:89] found id: ""
	I1206 09:49:27.790472  714616 logs.go:282] 0 containers: []
	W1206 09:49:27.790483  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:27.790490  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:27.790544  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:27.817354  714616 cri.go:89] found id: ""
	I1206 09:49:27.817382  714616 logs.go:282] 0 containers: []
	W1206 09:49:27.817394  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:27.817416  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:27.817433  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:27.848217  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:27.848249  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:27.906684  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:27.906721  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:27.942233  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:27.942263  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:28.001998  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:28.002024  714616 logs.go:123] Gathering logs for kube-controller-manager [10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc] ...
	I1206 09:49:28.002039  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
	I1206 09:49:28.033161  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:28.033190  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:28.117125  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:28.117169  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:28.135438  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:28.135481  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:28.168176  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:28.168203  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:30.699574  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:30.700050  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:30.700104  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:30.700161  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:30.731343  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:30.731369  714616 cri.go:89] found id: ""
	I1206 09:49:30.731379  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:30.731444  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:30.736088  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:30.736162  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:30.765637  714616 cri.go:89] found id: ""
	I1206 09:49:30.765670  714616 logs.go:282] 0 containers: []
	W1206 09:49:30.765682  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:30.765691  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:30.765752  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:30.796094  714616 cri.go:89] found id: ""
	I1206 09:49:30.796123  714616 logs.go:282] 0 containers: []
	W1206 09:49:30.796134  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:30.796143  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:30.796201  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:30.825121  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:30.825143  714616 cri.go:89] found id: ""
	I1206 09:49:30.825151  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:30.825200  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:30.829586  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:30.829666  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:30.862577  714616 cri.go:89] found id: ""
	I1206 09:49:30.862612  714616 logs.go:282] 0 containers: []
	W1206 09:49:30.862625  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:30.862634  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:30.862708  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:30.891921  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:30.891949  714616 cri.go:89] found id: "10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
	I1206 09:49:30.891956  714616 cri.go:89] found id: ""
	I1206 09:49:30.891966  714616 logs.go:282] 2 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc]
	I1206 09:49:30.892028  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:30.896368  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:30.901018  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:30.901089  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:30.932294  714616 cri.go:89] found id: ""
	I1206 09:49:30.932325  714616 logs.go:282] 0 containers: []
	W1206 09:49:30.932339  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:30.932349  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:30.932415  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:30.963982  714616 cri.go:89] found id: ""
	I1206 09:49:30.964007  714616 logs.go:282] 0 containers: []
	W1206 09:49:30.964018  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:30.964037  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:30.964055  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:31.019614  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:31.019663  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:31.113928  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:31.113971  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:31.141051  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:31.141435  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:31.222507  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:31.222531  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:31.222550  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:31.258928  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:31.258961  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:31.297854  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:31.297893  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:31.329933  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:31.329964  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:31.359400  714616 logs.go:123] Gathering logs for kube-controller-manager [10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc] ...
	I1206 09:49:31.359433  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10312434aa90e79408694c04ac21a263331b46533d6e8ea3de6e81f231dc1afc"
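	
	(Context for the "Gathering logs for ..." cycle above: each pass first resolves container IDs per component with crictl, then tails each container's log. A minimal standalone sketch of the same pattern, assuming crictl on PATH and root privileges; the component list here is illustrative:)
	
	for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
	  # -a includes exited containers; --quiet prints only the container IDs
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name [$id] ==="
	    sudo crictl logs --tail 400 "$id"
	  done
	done
	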
	I1206 09:49:30.162549  741534 out.go:252]   - Booting up control plane ...
	I1206 09:49:30.162647  741534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:49:30.162733  741534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:49:30.163417  741534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:49:30.177414  741534 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:49:30.178268  741534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:49:30.178336  741534 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:49:30.287300  741534 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 09:49:34.289729  741534 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.002476 seconds
	I1206 09:49:34.289899  741534 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:49:34.302494  741534 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:49:34.820875  741534 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:49:34.821056  741534 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-507108 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:49:35.330153  741534 kubeadm.go:319] [bootstrap-token] Using token: h44s3v.67kzufxv1f2w26jh
	I1206 09:49:30.671670  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:30.671703  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:30.740060  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:30.740096  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:30.780740  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:30.780770  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:30.816191  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:30.816225  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:30.884231  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:30.884250  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:30.884265  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:30.927819  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:30.927853  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:33.470798  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:33.471275  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:49:33.471334  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:33.471400  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:33.506742  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:33.506766  725997 cri.go:89] found id: ""
	I1206 09:49:33.506776  725997 logs.go:282] 1 containers: [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e]
	I1206 09:49:33.506837  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:33.511063  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:33.511124  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:33.545296  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:33.545319  725997 cri.go:89] found id: ""
	I1206 09:49:33.545329  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:33.545396  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:33.549221  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:33.549295  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:33.584217  725997 cri.go:89] found id: ""
	I1206 09:49:33.584243  725997 logs.go:282] 0 containers: []
	W1206 09:49:33.584252  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:33.584258  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:33.584318  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:33.619828  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:33.619849  725997 cri.go:89] found id: ""
	I1206 09:49:33.619859  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:33.619928  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:33.623881  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:33.623953  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:33.658478  725997 cri.go:89] found id: ""
	I1206 09:49:33.658510  725997 logs.go:282] 0 containers: []
	W1206 09:49:33.658521  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:33.658529  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:33.658588  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:33.696985  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:33.697009  725997 cri.go:89] found id: ""
	I1206 09:49:33.697017  725997 logs.go:282] 1 containers: [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03]
	I1206 09:49:33.697078  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:33.701412  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:33.701501  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:33.741556  725997 cri.go:89] found id: ""
	I1206 09:49:33.741597  725997 logs.go:282] 0 containers: []
	W1206 09:49:33.741610  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:33.741620  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:33.741691  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:33.785609  725997 cri.go:89] found id: ""
	I1206 09:49:33.785637  725997 logs.go:282] 0 containers: []
	W1206 09:49:33.785647  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:33.785666  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:33.785682  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:33.880399  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:33.880432  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:33.958585  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:33.958611  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:33.958628  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:34.008006  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:34.008049  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:34.052473  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:34.052511  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:34.134691  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:34.134728  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:34.162898  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:34.162928  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:34.206371  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:34.206408  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:34.243040  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:34.243072  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
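	
	(The "Checking apiserver healthz" lines throughout this section poll the apiserver's /healthz endpoint and treat "connection refused" as not-ready. A minimal curl equivalent, assuming the IP and port from this run and skipping certificate verification since only reachability matters here:)
	
	# Poll /healthz until the apiserver answers (IP:port as logged above)
	until curl -fsk --max-time 2 https://192.168.76.2:8443/healthz >/dev/null; do
	  sleep 2
	done
	echo "apiserver is healthy"
	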
	I1206 09:49:35.331283  741534 out.go:252]   - Configuring RBAC rules ...
	I1206 09:49:35.331428  741534 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:49:35.335699  741534 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:49:35.341702  741534 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:49:35.344229  741534 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:49:35.346884  741534 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:49:35.350315  741534 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:49:35.359192  741534 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:49:35.545078  741534 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:49:35.739946  741534 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:49:35.740789  741534 kubeadm.go:319] 
	I1206 09:49:35.740858  741534 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:49:35.740894  741534 kubeadm.go:319] 
	I1206 09:49:35.741019  741534 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:49:35.741046  741534 kubeadm.go:319] 
	I1206 09:49:35.741075  741534 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:49:35.741140  741534 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:49:35.741210  741534 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:49:35.741234  741534 kubeadm.go:319] 
	I1206 09:49:35.741303  741534 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:49:35.741311  741534 kubeadm.go:319] 
	I1206 09:49:35.741373  741534 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:49:35.741381  741534 kubeadm.go:319] 
	I1206 09:49:35.741442  741534 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:49:35.741566  741534 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:49:35.741651  741534 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:49:35.741670  741534 kubeadm.go:319] 
	I1206 09:49:35.741790  741534 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:49:35.741899  741534 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:49:35.741910  741534 kubeadm.go:319] 
	I1206 09:49:35.742030  741534 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h44s3v.67kzufxv1f2w26jh \
	I1206 09:49:35.742171  741534 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:49:35.742212  741534 kubeadm.go:319] 	--control-plane 
	I1206 09:49:35.742221  741534 kubeadm.go:319] 
	I1206 09:49:35.742320  741534 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:49:35.742333  741534 kubeadm.go:319] 
	I1206 09:49:35.742416  741534 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h44s3v.67kzufxv1f2w26jh \
	I1206 09:49:35.742537  741534 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
	I1206 09:49:35.744712  741534 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:49:35.744857  741534 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:49:35.744898  741534 cni.go:84] Creating CNI manager for ""
	I1206 09:49:35.744912  741534 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:49:35.746322  741534 out.go:179] * Configuring CNI (Container Networking Interface) ...
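	
	(For the CNI step announced above: later in this run the same process stats the portmap plugin and applies the generated manifest with the bundled kubectl. That sequence, with paths as they appear in this log, reduces to roughly:)
	
	# Verify the portmap CNI plugin is present, then apply the CNI manifest
	stat /opt/cni/bin/portmap
	sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	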
	I1206 09:49:33.888791  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:33.889230  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:33.889299  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:33.889362  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:33.926612  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:33.926638  714616 cri.go:89] found id: ""
	I1206 09:49:33.926649  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:33.926717  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:33.931989  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:33.932070  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:33.966728  714616 cri.go:89] found id: ""
	I1206 09:49:33.966752  714616 logs.go:282] 0 containers: []
	W1206 09:49:33.966760  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:33.966766  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:33.966830  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:34.001006  714616 cri.go:89] found id: ""
	I1206 09:49:34.001039  714616 logs.go:282] 0 containers: []
	W1206 09:49:34.001051  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:34.001060  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:34.001124  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:34.034601  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:34.034629  714616 cri.go:89] found id: ""
	I1206 09:49:34.034641  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:34.034709  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:34.039972  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:34.040052  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:34.075552  714616 cri.go:89] found id: ""
	I1206 09:49:34.075585  714616 logs.go:282] 0 containers: []
	W1206 09:49:34.075596  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:34.075604  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:34.075667  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:34.112355  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:34.112379  714616 cri.go:89] found id: ""
	I1206 09:49:34.112389  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:34.112450  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:34.117439  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:34.117531  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:34.152362  714616 cri.go:89] found id: ""
	I1206 09:49:34.152391  714616 logs.go:282] 0 containers: []
	W1206 09:49:34.152402  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:34.152410  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:34.152495  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:34.185417  714616 cri.go:89] found id: ""
	I1206 09:49:34.185448  714616 logs.go:282] 0 containers: []
	W1206 09:49:34.185488  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:34.185500  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:34.185519  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:34.222074  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:34.222109  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:34.250841  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:34.250869  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:34.280261  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:34.280305  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:34.340443  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:34.340490  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:34.373017  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:34.373046  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:34.453525  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:34.453570  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:34.473530  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:34.473564  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:34.530301  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:35.747385  741534 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:49:35.751807  741534 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1206 09:49:35.751833  741534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:49:35.765412  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:49:36.422494  741534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:49:36.422539  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:36.422602  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-507108 minikube.k8s.io/updated_at=2025_12_06T09_49_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=old-k8s-version-507108 minikube.k8s.io/primary=true
	I1206 09:49:36.433841  741534 ops.go:34] apiserver oom_adj: -16
	I1206 09:49:36.513202  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:37.014046  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:37.513664  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:38.013264  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
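	
	(The repeated "kubectl get sa default" runs above are a readiness gate: the default ServiceAccount only exists once the controller-manager's service-account controller has started, so minikube retries until the command succeeds. A minimal sketch of the same wait, using the binary and kubeconfig paths from this log:)
	
	# Block until the default ServiceAccount appears (control plane fully up)
	until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	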
	I1206 09:49:36.787819  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:37.031184  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:37.031641  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:37.031705  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:37.031762  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:37.061672  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:37.061694  714616 cri.go:89] found id: ""
	I1206 09:49:37.061702  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:37.061748  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:37.066179  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:37.066254  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:37.094199  714616 cri.go:89] found id: ""
	I1206 09:49:37.094239  714616 logs.go:282] 0 containers: []
	W1206 09:49:37.094251  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:37.094262  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:37.094316  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:37.121880  714616 cri.go:89] found id: ""
	I1206 09:49:37.121910  714616 logs.go:282] 0 containers: []
	W1206 09:49:37.121923  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:37.121932  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:37.121986  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:37.148649  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:37.148669  714616 cri.go:89] found id: ""
	I1206 09:49:37.148678  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:37.148728  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:37.152591  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:37.152665  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:37.179721  714616 cri.go:89] found id: ""
	I1206 09:49:37.179748  714616 logs.go:282] 0 containers: []
	W1206 09:49:37.179758  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:37.179766  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:37.179827  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:37.206869  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:37.206895  714616 cri.go:89] found id: ""
	I1206 09:49:37.206907  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:37.206963  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:37.210836  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:37.210898  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:37.236895  714616 cri.go:89] found id: ""
	I1206 09:49:37.236917  714616 logs.go:282] 0 containers: []
	W1206 09:49:37.236925  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:37.236930  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:37.236984  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:37.264510  714616 cri.go:89] found id: ""
	I1206 09:49:37.264533  714616 logs.go:282] 0 containers: []
	W1206 09:49:37.264541  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:37.264550  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:37.264562  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:37.315553  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:37.315584  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:37.346687  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:37.346717  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:37.428185  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:37.428219  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:37.447649  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:37.447684  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:37.505860  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:37.505879  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:37.505893  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:37.539256  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:37.539289  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:37.567359  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:37.567400  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:40.098530  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:40.099002  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:40.099064  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:40.099129  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:40.128559  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:40.128583  714616 cri.go:89] found id: ""
	I1206 09:49:40.128593  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:40.128652  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:40.132723  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:40.132783  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:40.160145  714616 cri.go:89] found id: ""
	I1206 09:49:40.160169  714616 logs.go:282] 0 containers: []
	W1206 09:49:40.160179  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:40.160185  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:40.160234  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:40.187200  714616 cri.go:89] found id: ""
	I1206 09:49:40.187230  714616 logs.go:282] 0 containers: []
	W1206 09:49:40.187247  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:40.187255  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:40.187312  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:40.215879  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:40.215901  714616 cri.go:89] found id: ""
	I1206 09:49:40.215910  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:40.215968  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:40.220502  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:40.220574  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:40.248438  714616 cri.go:89] found id: ""
	I1206 09:49:40.248486  714616 logs.go:282] 0 containers: []
	W1206 09:49:40.248507  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:40.248518  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:40.248579  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:40.276485  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:40.276514  714616 cri.go:89] found id: ""
	I1206 09:49:40.276526  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:40.276597  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:40.280972  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:40.281046  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:40.309097  714616 cri.go:89] found id: ""
	I1206 09:49:40.309126  714616 logs.go:282] 0 containers: []
	W1206 09:49:40.309137  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:40.309153  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:40.309223  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:40.338567  714616 cri.go:89] found id: ""
	I1206 09:49:40.338592  714616 logs.go:282] 0 containers: []
	W1206 09:49:40.338600  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:40.338611  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:40.338623  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:40.369810  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:40.369839  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:40.452541  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:40.452579  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:40.472062  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:40.472094  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:40.531993  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:40.532013  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:40.532029  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:40.565843  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:40.565887  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:40.598444  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:40.598497  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:40.628085  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:40.628122  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:38.514267  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:39.014267  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:39.513372  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:40.014048  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:40.513988  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:41.013422  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:41.513524  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:42.013991  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:42.513320  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:43.013376  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:41.788582  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 09:49:41.788651  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:41.788717  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:41.825241  725997 cri.go:89] found id: "cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:49:41.825270  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:41.825276  725997 cri.go:89] found id: ""
	I1206 09:49:41.825287  725997 logs.go:282] 2 containers: [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e]
	I1206 09:49:41.825346  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:41.829387  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:41.833179  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:41.833248  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:41.867609  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:41.867631  725997 cri.go:89] found id: ""
	I1206 09:49:41.867640  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:41.867694  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:41.871534  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:41.871619  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:41.907048  725997 cri.go:89] found id: ""
	I1206 09:49:41.907082  725997 logs.go:282] 0 containers: []
	W1206 09:49:41.907106  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:41.907114  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:41.907166  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:41.942692  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:41.942718  725997 cri.go:89] found id: ""
	I1206 09:49:41.942726  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:41.942779  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:41.946775  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:41.946846  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:41.983990  725997 cri.go:89] found id: ""
	I1206 09:49:41.984026  725997 logs.go:282] 0 containers: []
	W1206 09:49:41.984038  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:41.984046  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:41.984102  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:42.020444  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:42.020505  725997 cri.go:89] found id: ""
	I1206 09:49:42.020571  725997 logs.go:282] 1 containers: [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03]
	I1206 09:49:42.020644  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:42.024875  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:42.024950  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:42.067652  725997 cri.go:89] found id: ""
	I1206 09:49:42.067682  725997 logs.go:282] 0 containers: []
	W1206 09:49:42.067693  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:42.067704  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:42.067771  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:42.105364  725997 cri.go:89] found id: ""
	I1206 09:49:42.105392  725997 logs.go:282] 0 containers: []
	W1206 09:49:42.105402  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:42.105421  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:42.105435  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:42.184419  725997 logs.go:123] Gathering logs for kube-apiserver [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69] ...
	I1206 09:49:42.184464  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:49:42.222727  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:42.222760  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:42.258030  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:42.258066  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:42.319233  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:42.319269  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:42.356024  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:42.356051  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:42.393303  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:42.393339  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:42.433004  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:42.433032  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:42.454059  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:42.454096  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 09:49:43.183756  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:43.184220  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:43.184282  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:43.184350  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:43.212239  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:43.212267  714616 cri.go:89] found id: ""
	I1206 09:49:43.212277  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:43.212334  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:43.216566  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:43.216633  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:43.243509  714616 cri.go:89] found id: ""
	I1206 09:49:43.243536  714616 logs.go:282] 0 containers: []
	W1206 09:49:43.243544  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:43.243550  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:43.243599  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:43.272431  714616 cri.go:89] found id: ""
	I1206 09:49:43.272485  714616 logs.go:282] 0 containers: []
	W1206 09:49:43.272496  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:43.272506  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:43.272572  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:43.300271  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:43.300302  714616 cri.go:89] found id: ""
	I1206 09:49:43.300310  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:43.300377  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:43.304526  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:43.304597  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:43.332946  714616 cri.go:89] found id: ""
	I1206 09:49:43.332978  714616 logs.go:282] 0 containers: []
	W1206 09:49:43.332989  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:43.332995  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:43.333047  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:43.361176  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:43.361198  714616 cri.go:89] found id: ""
	I1206 09:49:43.361206  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:43.361259  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:43.365355  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:43.365415  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:43.394766  714616 cri.go:89] found id: ""
	I1206 09:49:43.394793  714616 logs.go:282] 0 containers: []
	W1206 09:49:43.394804  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:43.394812  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:43.394875  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:43.422259  714616 cri.go:89] found id: ""
	I1206 09:49:43.422290  714616 logs.go:282] 0 containers: []
	W1206 09:49:43.422302  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:43.422314  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:43.422330  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:43.440794  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:43.440821  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:43.495373  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
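
The api_server.go:253/269 pairs above show the health loop: minikube polls the apiserver's /healthz endpoint and the dial fails with connection refused because nothing is listening on 192.168.103.2:8443, which is also why every `kubectl describe nodes` attempt against localhost:8443 is refused. A minimal Go sketch of such a probe (illustrative only, not minikube's actual implementation; the URL is taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues a GET against the apiserver's /healthz endpoint.
// The apiserver serves a self-signed certificate, so verification is
// skipped here; a health probe only cares whether the socket accepts
// connections and the endpoint answers "ok".
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// A down apiserver typically surfaces here as
		// "dial tcp ...: connect: connection refused".
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.103.2:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}

A refused dial at this layer, rather than a non-200 status, is what distinguishes "apiserver process is gone" from "apiserver is up but unhealthy".
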
	I1206 09:49:43.495395  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:43.495414  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:43.530701  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:43.530742  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:43.562478  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:43.562508  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:43.592388  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:43.592415  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:43.648830  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:43.648862  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:43.679255  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:43.679285  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:46.257098  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:46.257545  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:46.257612  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:46.257680  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:46.286766  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:46.286786  714616 cri.go:89] found id: ""
	I1206 09:49:46.286794  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:46.286848  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:46.290871  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:46.290932  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:46.318419  714616 cri.go:89] found id: ""
	I1206 09:49:46.318451  714616 logs.go:282] 0 containers: []
	W1206 09:49:46.318475  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:46.318483  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:46.318537  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:46.345619  714616 cri.go:89] found id: ""
	I1206 09:49:46.345647  714616 logs.go:282] 0 containers: []
	W1206 09:49:46.345657  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:46.345665  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:46.345725  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:46.374070  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:46.374092  714616 cri.go:89] found id: ""
	I1206 09:49:46.374104  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:46.374169  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:46.378165  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:46.378240  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:46.405092  714616 cri.go:89] found id: ""
	I1206 09:49:46.405117  714616 logs.go:282] 0 containers: []
	W1206 09:49:46.405127  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:46.405135  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:46.405191  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:46.432218  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:46.432249  714616 cri.go:89] found id: ""
	I1206 09:49:46.432258  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:46.432309  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:46.436549  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:46.436610  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:46.462890  714616 cri.go:89] found id: ""
	I1206 09:49:46.462922  714616 logs.go:282] 0 containers: []
	W1206 09:49:46.462930  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:46.462935  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:46.462980  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:46.489447  714616 cri.go:89] found id: ""
	I1206 09:49:46.489488  714616 logs.go:282] 0 containers: []
	W1206 09:49:46.489498  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:46.489509  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:46.489522  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:46.520532  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:46.520568  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:46.548964  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:46.548992  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:46.579212  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:46.579255  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:46.637996  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:46.638030  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:46.670415  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:46.670443  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:46.751984  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:46.752020  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
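
Each cri.go:54/89 block above comes from shelling out to crictl with --quiet, which prints one container ID per line; splitting that output and dropping blanks yields the `found id` entries, and an empty result produces the `0 containers: []` warning. Roughly (a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainersByName runs crictl with --quiet so stdout is just one
// container ID per line, then filters out blank lines. An empty slice
// corresponds to the `0 containers: []` lines in the log above.
func listContainersByName(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainersByName("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
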
	I1206 09:49:43.514321  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:44.014242  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:44.514204  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:45.013428  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:45.513516  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:46.013561  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:46.513960  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:47.013889  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:47.513836  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:48.013680  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:48.513917  741534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:49:48.582815  741534 kubeadm.go:1114] duration metric: took 12.160324173s to wait for elevateKubeSystemPrivileges
	I1206 09:49:48.582858  741534 kubeadm.go:403] duration metric: took 22.168646302s to StartCluster
	I1206 09:49:48.582882  741534 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:48.582959  741534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:49:48.584848  741534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:48.585062  741534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:49:48.585072  741534 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:49:48.585139  741534 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:49:48.585249  741534 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-507108"
	I1206 09:49:48.585272  741534 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-507108"
	I1206 09:49:48.585310  741534 host.go:66] Checking if "old-k8s-version-507108" exists ...
	I1206 09:49:48.585313  741534 config.go:182] Loaded profile config "old-k8s-version-507108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:49:48.585333  741534 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-507108"
	I1206 09:49:48.585387  741534 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-507108"
	I1206 09:49:48.585978  741534 cli_runner.go:164] Run: docker container inspect old-k8s-version-507108 --format={{.State.Status}}
	I1206 09:49:48.585984  741534 cli_runner.go:164] Run: docker container inspect old-k8s-version-507108 --format={{.State.Status}}
	I1206 09:49:48.586731  741534 out.go:179] * Verifying Kubernetes components...
	I1206 09:49:48.588512  741534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:49:48.610189  741534 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-507108"
	I1206 09:49:48.610231  741534 host.go:66] Checking if "old-k8s-version-507108" exists ...
	I1206 09:49:48.610599  741534 cli_runner.go:164] Run: docker container inspect old-k8s-version-507108 --format={{.State.Status}}
	I1206 09:49:48.612674  741534 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:49:48.613942  741534 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:49:48.613962  741534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:49:48.614004  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:48.634795  741534 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:49:48.634827  741534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:49:48.634896  741534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:49:48.645699  741534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa Username:docker}
	I1206 09:49:48.658352  741534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa Username:docker}
	I1206 09:49:48.680123  741534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:49:48.729340  741534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:49:48.754758  741534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:49:48.764429  741534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:49:48.935139  741534 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1206 09:49:48.936513  741534 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-507108" to be "Ready" ...
	I1206 09:49:49.141753  741534 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
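
The run of `kubectl get sa default` commands above, spaced about 500ms apart, is a poll for the `default` service account, which the controller manager creates asynchronously once the control plane is up; kubeadm.go:1114 reports the poll took 12.16s here. A minimal sketch of that kind of wait loop (the interval and timeout are assumptions; the binary and kubeconfig paths are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the service
// account exists or the timeout expires. kubectl exits non-zero while
// the object is missing (or the apiserver is unreachable), so a nil
// error from Run is the success signal.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the spacing in the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
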
	I1206 09:49:46.771916  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:46.771945  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:46.830013  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:49.330427  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:49.330952  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:49.331015  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:49.331081  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:49.359373  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:49.359393  714616 cri.go:89] found id: ""
	I1206 09:49:49.359401  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:49.359495  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:49.363846  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:49.363909  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:49.392374  714616 cri.go:89] found id: ""
	I1206 09:49:49.392399  714616 logs.go:282] 0 containers: []
	W1206 09:49:49.392406  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:49.392412  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:49.392488  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:49.420658  714616 cri.go:89] found id: ""
	I1206 09:49:49.420681  714616 logs.go:282] 0 containers: []
	W1206 09:49:49.420690  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:49.420695  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:49.420744  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:49.448660  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:49.448679  714616 cri.go:89] found id: ""
	I1206 09:49:49.448691  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:49.448739  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:49.452746  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:49.452814  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:49.479664  714616 cri.go:89] found id: ""
	I1206 09:49:49.479688  714616 logs.go:282] 0 containers: []
	W1206 09:49:49.479697  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:49.479703  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:49.479762  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:49.506138  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:49.506162  714616 cri.go:89] found id: ""
	I1206 09:49:49.506171  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:49.506228  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:49.510233  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:49.510287  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:49.536634  714616 cri.go:89] found id: ""
	I1206 09:49:49.536658  714616 logs.go:282] 0 containers: []
	W1206 09:49:49.536666  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:49.536673  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:49.536722  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:49.563862  714616 cri.go:89] found id: ""
	I1206 09:49:49.563896  714616 logs.go:282] 0 containers: []
	W1206 09:49:49.563908  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:49.563920  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:49.563936  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:49.617591  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:49.617626  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:49.650047  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:49.650080  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:49.732938  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:49.732978  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:49.752011  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:49.752039  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:49.809261  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:49.809288  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:49.809306  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:49.842474  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:49.842502  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:49.869772  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:49.869800  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:49.142652  741534 addons.go:530] duration metric: took 557.515974ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:49:49.439596  741534 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-507108" context rescaled to 1 replicas
	W1206 09:49:50.940116  741534 node_ready.go:57] node "old-k8s-version-507108" has "Ready":"False" status (will retry)
	W1206 09:49:53.439494  741534 node_ready.go:57] node "old-k8s-version-507108" has "Ready":"False" status (will retry)
	I1206 09:49:52.516797  725997 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062677239s)
	W1206 09:49:52.516844  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1206 09:49:52.516854  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:52.516874  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:55.062553  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:52.397702  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:52.398100  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:52.398157  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:52.398208  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:52.426099  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:52.426121  714616 cri.go:89] found id: ""
	I1206 09:49:52.426129  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:52.426178  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:52.430291  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:52.430347  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:52.457425  714616 cri.go:89] found id: ""
	I1206 09:49:52.457449  714616 logs.go:282] 0 containers: []
	W1206 09:49:52.457491  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:52.457498  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:52.457550  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:52.486104  714616 cri.go:89] found id: ""
	I1206 09:49:52.486148  714616 logs.go:282] 0 containers: []
	W1206 09:49:52.486159  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:52.486170  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:52.486230  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:52.512932  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:52.512957  714616 cri.go:89] found id: ""
	I1206 09:49:52.512966  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:52.513024  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:52.517296  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:52.517358  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:52.550322  714616 cri.go:89] found id: ""
	I1206 09:49:52.550348  714616 logs.go:282] 0 containers: []
	W1206 09:49:52.550356  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:52.550364  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:52.550423  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:52.577992  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:52.578019  714616 cri.go:89] found id: ""
	I1206 09:49:52.578031  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:52.578093  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:52.582033  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:52.582089  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:52.609044  714616 cri.go:89] found id: ""
	I1206 09:49:52.609079  714616 logs.go:282] 0 containers: []
	W1206 09:49:52.609093  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:52.609103  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:52.609165  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:52.635340  714616 cri.go:89] found id: ""
	I1206 09:49:52.635373  714616 logs.go:282] 0 containers: []
	W1206 09:49:52.635383  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:52.635396  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:52.635410  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:52.692012  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:52.692041  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:52.692055  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:52.722967  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:52.722997  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:52.751323  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:52.751351  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:52.778136  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:52.778165  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:52.833991  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:52.834024  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:52.864792  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:52.864837  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:52.946228  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:52.946266  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:55.468604  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:55.469027  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:55.469084  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:55.469134  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:55.497115  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:55.497138  714616 cri.go:89] found id: ""
	I1206 09:49:55.497148  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:55.497207  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:55.501181  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:55.501250  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:55.528077  714616 cri.go:89] found id: ""
	I1206 09:49:55.528103  714616 logs.go:282] 0 containers: []
	W1206 09:49:55.528110  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:55.528116  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:55.528171  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:55.554063  714616 cri.go:89] found id: ""
	I1206 09:49:55.554090  714616 logs.go:282] 0 containers: []
	W1206 09:49:55.554098  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:55.554104  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:55.554157  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:55.581032  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:55.581053  714616 cri.go:89] found id: ""
	I1206 09:49:55.581063  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:55.581124  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:55.585225  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:55.585282  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:55.612671  714616 cri.go:89] found id: ""
	I1206 09:49:55.612707  714616 logs.go:282] 0 containers: []
	W1206 09:49:55.612717  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:55.612726  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:55.612788  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:55.640209  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:55.640235  714616 cri.go:89] found id: ""
	I1206 09:49:55.640246  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:55.640326  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:55.644488  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:55.644572  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:55.671287  714616 cri.go:89] found id: ""
	I1206 09:49:55.671314  714616 logs.go:282] 0 containers: []
	W1206 09:49:55.671324  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:55.671332  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:55.671392  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:55.697530  714616 cri.go:89] found id: ""
	I1206 09:49:55.697561  714616 logs.go:282] 0 containers: []
	W1206 09:49:55.697572  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:55.697585  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:55.697598  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:55.749816  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:55.749852  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:55.781362  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:55.781390  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:55.863602  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:55.863639  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:55.882426  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:55.882452  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:55.939584  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:55.939610  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:55.939626  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:55.971888  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:55.971917  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:55.998352  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:55.998380  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	W1206 09:49:55.440016  741534 node_ready.go:57] node "old-k8s-version-507108" has "Ready":"False" status (will retry)
	W1206 09:49:57.440228  741534 node_ready.go:57] node "old-k8s-version-507108" has "Ready":"False" status (will retry)
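
The node_ready.go lines above poll the node object and retry while its Ready condition reports False, which persists until the kubelet, the CNI, and the container runtime have all checked in. Checking that condition with client-go looks roughly like this (a sketch assuming the standard k8s.io/client-go modules are available; the retry loop around it is left out):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and inspects its Ready condition; the
// `"Ready":"False"` log lines above correspond to this returning false.
func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(
		context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(clientset, "old-k8s-version-507108")
	fmt.Println(ready, err)
}
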
	I1206 09:49:56.616631  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:40966->192.168.76.2:8443: read: connection reset by peer
	I1206 09:49:56.616721  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:56.616798  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:56.654426  725997 cri.go:89] found id: "cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:49:56.654447  725997 cri.go:89] found id: "62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:56.654468  725997 cri.go:89] found id: ""
	I1206 09:49:56.654480  725997 logs.go:282] 2 containers: [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e]
	I1206 09:49:56.654541  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:56.658696  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:56.662749  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:56.662803  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:56.701305  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:56.701329  725997 cri.go:89] found id: ""
	I1206 09:49:56.701339  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:49:56.701401  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:56.705122  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:56.705178  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:56.739262  725997 cri.go:89] found id: ""
	I1206 09:49:56.739287  725997 logs.go:282] 0 containers: []
	W1206 09:49:56.739295  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:49:56.739301  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:56.739353  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:56.781633  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:56.781658  725997 cri.go:89] found id: ""
	I1206 09:49:56.781667  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:49:56.781716  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:56.785474  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:56.785544  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:56.819786  725997 cri.go:89] found id: ""
	I1206 09:49:56.819813  725997 logs.go:282] 0 containers: []
	W1206 09:49:56.819824  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:56.819831  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:56.819891  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:56.856605  725997 cri.go:89] found id: "63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:49:56.856632  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:56.856639  725997 cri.go:89] found id: ""
	I1206 09:49:56.856647  725997 logs.go:282] 2 containers: [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03]
	I1206 09:49:56.856695  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:56.860542  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:56.864186  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:56.864243  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:56.907131  725997 cri.go:89] found id: ""
	I1206 09:49:56.907157  725997 logs.go:282] 0 containers: []
	W1206 09:49:56.907164  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:56.907170  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:56.907228  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:56.943289  725997 cri.go:89] found id: ""
	I1206 09:49:56.943312  725997 logs.go:282] 0 containers: []
	W1206 09:49:56.943320  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:56.943329  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:49:56.943343  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:49:56.981419  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:56.981445  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:57.001199  725997 logs.go:123] Gathering logs for kube-apiserver [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69] ...
	I1206 09:49:57.001232  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:49:57.039989  725997 logs.go:123] Gathering logs for kube-apiserver [62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e] ...
	I1206 09:49:57.040022  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62fe06a84f42a103820db0bf071df0b318a582c5a66ab39fe2e327f3d312e38e"
	I1206 09:49:57.079438  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:49:57.079481  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:49:57.143948  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:57.143979  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:57.183803  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:49:57.183832  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:57.224140  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:57.224173  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:57.317262  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:57.317303  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:57.381729  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:57.381766  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:49:57.381783  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:49:57.419334  725997 logs.go:123] Gathering logs for kube-controller-manager [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a] ...
	I1206 09:49:57.419368  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
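
Every log-gathering pass above is bounded: journalctl -n 400 for the kubelet and CRI-O units, crictl logs --tail 400 for individual containers, so a full diagnostic sweep stays cheap even while components crash-loop. Condensed into a local sketch (the container ID is the kube-apiserver ID from the log above; all of these need root):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one bounded log command and returns its combined output.
// The --tail / -n 400 limits mirror the commands in the log, capping
// each capture at 400 lines.
func gather(name string, args ...string) string {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Sprintf("%s: %v\n%s", name, err, out)
	}
	return string(out)
}

func main() {
	fmt.Println(gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400"))
	fmt.Println(gather("CRI-O", "sudo", "journalctl", "-u", "crio", "-n", "400"))
	fmt.Println(gather("kube-apiserver", "sudo", "crictl", "logs", "--tail", "400",
		"cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"))
}
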
	I1206 09:49:59.956042  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:49:59.956489  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:49:59.956541  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:59.956592  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:59.992018  725997 cri.go:89] found id: "cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:49:59.992044  725997 cri.go:89] found id: ""
	I1206 09:49:59.992053  725997 logs.go:282] 1 containers: [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69]
	I1206 09:49:59.992107  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:49:59.996103  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:59.996169  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:50:00.030016  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:50:00.030041  725997 cri.go:89] found id: ""
	I1206 09:50:00.030050  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:50:00.030112  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:00.033910  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:50:00.033964  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:50:00.069683  725997 cri.go:89] found id: ""
	I1206 09:50:00.069712  725997 logs.go:282] 0 containers: []
	W1206 09:50:00.069723  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:50:00.069731  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:50:00.069789  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:50:00.104118  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:50:00.104142  725997 cri.go:89] found id: ""
	I1206 09:50:00.104153  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:50:00.104224  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:00.108940  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:50:00.109022  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:50:00.144255  725997 cri.go:89] found id: ""
	I1206 09:50:00.144289  725997 logs.go:282] 0 containers: []
	W1206 09:50:00.144299  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:50:00.144308  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:50:00.144373  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:50:00.179948  725997 cri.go:89] found id: "63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:50:00.179972  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:50:00.179978  725997 cri.go:89] found id: ""
	I1206 09:50:00.179989  725997 logs.go:282] 2 containers: [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03]
	I1206 09:50:00.180041  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:00.184121  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:00.188014  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:50:00.188070  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:50:00.222780  725997 cri.go:89] found id: ""
	I1206 09:50:00.222815  725997 logs.go:282] 0 containers: []
	W1206 09:50:00.222829  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:50:00.222838  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:50:00.222911  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:50:00.258512  725997 cri.go:89] found id: ""
	I1206 09:50:00.258547  725997 logs.go:282] 0 containers: []
	W1206 09:50:00.258558  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:50:00.258577  725997 logs.go:123] Gathering logs for kube-apiserver [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69] ...
	I1206 09:50:00.258592  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:50:00.296519  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:50:00.296549  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:50:00.332231  725997 logs.go:123] Gathering logs for kube-controller-manager [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a] ...
	I1206 09:50:00.332264  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:50:00.368527  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:50:00.368558  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:50:00.405545  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:50:00.405582  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:50:00.489574  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:50:00.489612  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:50:00.510167  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:50:00.510196  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:50:00.572749  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:50:00.572771  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:50:00.572788  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:50:00.638016  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:50:00.638052  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
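
The `container status` command above encodes a runtime fallback in shell: `which crictl || echo crictl` resolves crictl's full path (falling back to the bare name if `which` fails), and the trailing `|| sudo docker ps -a` tries Docker only if crictl itself errors out, so one line works on both crio and docker clusters. The same chain without command substitution (a sketch, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, the same
// fallback the shell pipeline in the log expresses with `||`.
func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no runtime answered:", err)
		return
	}
	fmt.Print(out)
}
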
	I1206 09:49:58.525528  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:49:58.526004  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:49:58.526077  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:58.526142  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:58.555649  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:49:58.555671  714616 cri.go:89] found id: ""
	I1206 09:49:58.555680  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:49:58.555731  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:58.559942  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:49:58.560020  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:58.587940  714616 cri.go:89] found id: ""
	I1206 09:49:58.587969  714616 logs.go:282] 0 containers: []
	W1206 09:49:58.587980  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:49:58.587987  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:49:58.588046  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:58.615041  714616 cri.go:89] found id: ""
	I1206 09:49:58.615066  714616 logs.go:282] 0 containers: []
	W1206 09:49:58.615074  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:49:58.615080  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:58.615134  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:58.642818  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:58.642840  714616 cri.go:89] found id: ""
	I1206 09:49:58.642851  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:49:58.642912  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:58.646803  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:58.646869  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:58.674755  714616 cri.go:89] found id: ""
	I1206 09:49:58.674779  714616 logs.go:282] 0 containers: []
	W1206 09:49:58.674789  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:58.674797  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:58.674856  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:58.701851  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:58.701875  714616 cri.go:89] found id: ""
	I1206 09:49:58.701885  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:49:58.701941  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:49:58.705910  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:58.705981  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:58.731770  714616 cri.go:89] found id: ""
	I1206 09:49:58.731797  714616 logs.go:282] 0 containers: []
	W1206 09:49:58.731804  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:58.731811  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:58.731869  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:58.760282  714616 cri.go:89] found id: ""
	I1206 09:49:58.760306  714616 logs.go:282] 0 containers: []
	W1206 09:49:58.760314  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:58.760324  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:49:58.760339  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:49:58.787226  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:49:58.787261  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:49:58.814636  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:49:58.814663  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:49:58.866750  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:49:58.866783  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:58.896498  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:58.896524  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:58.983616  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:58.983654  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:49:59.003116  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:59.003145  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:59.060123  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:59.060146  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:49:59.060162  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:50:01.591647  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:50:01.592143  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:50:01.592205  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:50:01.592272  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:50:01.620962  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:50:01.620981  714616 cri.go:89] found id: ""
	I1206 09:50:01.620989  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:50:01.621050  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:01.625174  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:50:01.625259  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:50:01.654181  714616 cri.go:89] found id: ""
	I1206 09:50:01.654212  714616 logs.go:282] 0 containers: []
	W1206 09:50:01.654223  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:50:01.654231  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:50:01.654299  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:50:01.685073  714616 cri.go:89] found id: ""
	I1206 09:50:01.685100  714616 logs.go:282] 0 containers: []
	W1206 09:50:01.685111  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:50:01.685119  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:50:01.685184  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:50:01.714016  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:50:01.714042  714616 cri.go:89] found id: ""
	I1206 09:50:01.714051  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:50:01.714105  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:01.718325  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:50:01.718393  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:50:01.746735  714616 cri.go:89] found id: ""
	I1206 09:50:01.746768  714616 logs.go:282] 0 containers: []
	W1206 09:50:01.746780  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:50:01.746788  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:50:01.746852  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1206 09:49:59.939447  741534 node_ready.go:57] node "old-k8s-version-507108" has "Ready":"False" status (will retry)
	I1206 09:50:01.939836  741534 node_ready.go:49] node "old-k8s-version-507108" is "Ready"
	I1206 09:50:01.939865  741534 node_ready.go:38] duration metric: took 13.003322436s for node "old-k8s-version-507108" to be "Ready" ...
	I1206 09:50:01.939883  741534 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:50:01.939953  741534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:50:01.954446  741534 api_server.go:72] duration metric: took 13.369338986s to wait for apiserver process to appear ...
	I1206 09:50:01.954487  741534 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:50:01.954509  741534 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:50:01.959917  741534 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 09:50:01.961185  741534 api_server.go:141] control plane version: v1.28.0
	I1206 09:50:01.961215  741534 api_server.go:131] duration metric: took 6.718686ms to wait for apiserver health ...
	I1206 09:50:01.961233  741534 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:50:01.965141  741534 system_pods.go:59] 8 kube-system pods found
	I1206 09:50:01.965187  741534 system_pods.go:61] "coredns-5dd5756b68-qvppb" [2a83cabb-a34e-496d-a9cf-f2017553b4d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:50:01.965197  741534 system_pods.go:61] "etcd-old-k8s-version-507108" [b98d28d9-0821-4c34-9323-c707d562f258] Running
	I1206 09:50:01.965210  741534 system_pods.go:61] "kindnet-pdc9w" [6a26f6a1-0a33-4271-983c-5c8b7e00efe3] Running
	I1206 09:50:01.965216  741534 system_pods.go:61] "kube-apiserver-old-k8s-version-507108" [280ccf0b-5c23-47ed-a558-ded6886ccd72] Running
	I1206 09:50:01.965225  741534 system_pods.go:61] "kube-controller-manager-old-k8s-version-507108" [392e4a3f-5f64-45c1-91cd-497784c58953] Running
	I1206 09:50:01.965230  741534 system_pods.go:61] "kube-proxy-q6xpd" [38af9a91-0e42-4afe-a310-7858c5e1b946] Running
	I1206 09:50:01.965234  741534 system_pods.go:61] "kube-scheduler-old-k8s-version-507108" [47942056-bd44-4a7f-b663-940859a025d6] Running
	I1206 09:50:01.965238  741534 system_pods.go:61] "storage-provisioner" [4cb0587e-58b5-46f1-80da-ca3de4441ae4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:50:01.965250  741534 system_pods.go:74] duration metric: took 4.010474ms to wait for pod list to return data ...
	I1206 09:50:01.965263  741534 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:50:01.967660  741534 default_sa.go:45] found service account: "default"
	I1206 09:50:01.967685  741534 default_sa.go:55] duration metric: took 2.410992ms for default service account to be created ...
	I1206 09:50:01.967711  741534 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:50:01.971313  741534 system_pods.go:86] 8 kube-system pods found
	I1206 09:50:01.971341  741534 system_pods.go:89] "coredns-5dd5756b68-qvppb" [2a83cabb-a34e-496d-a9cf-f2017553b4d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:50:01.971347  741534 system_pods.go:89] "etcd-old-k8s-version-507108" [b98d28d9-0821-4c34-9323-c707d562f258] Running
	I1206 09:50:01.971353  741534 system_pods.go:89] "kindnet-pdc9w" [6a26f6a1-0a33-4271-983c-5c8b7e00efe3] Running
	I1206 09:50:01.971360  741534 system_pods.go:89] "kube-apiserver-old-k8s-version-507108" [280ccf0b-5c23-47ed-a558-ded6886ccd72] Running
	I1206 09:50:01.971363  741534 system_pods.go:89] "kube-controller-manager-old-k8s-version-507108" [392e4a3f-5f64-45c1-91cd-497784c58953] Running
	I1206 09:50:01.971367  741534 system_pods.go:89] "kube-proxy-q6xpd" [38af9a91-0e42-4afe-a310-7858c5e1b946] Running
	I1206 09:50:01.971370  741534 system_pods.go:89] "kube-scheduler-old-k8s-version-507108" [47942056-bd44-4a7f-b663-940859a025d6] Running
	I1206 09:50:01.971374  741534 system_pods.go:89] "storage-provisioner" [4cb0587e-58b5-46f1-80da-ca3de4441ae4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:50:01.971396  741534 retry.go:31] will retry after 261.558113ms: missing components: kube-dns
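The retry.go:31 line shows the wait loop for kube-system components: each failed check schedules another attempt after a short, jittered delay ("will retry after 261.558113ms"), until the pods are running or a deadline passes. A minimal sketch of that retry-with-backoff shape — retryUntil is hypothetical, and the real intervals and growth policy may differ:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil polls check, sleeping a randomized, growing interval
	// between failed attempts until success or the deadline elapses.
	func retryUntil(deadline time.Duration, check func() error) error {
		start := time.Now()
		backoff := 200 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out: %w", err)
			}
			// Jitter the wait, as in "will retry after 261.558113ms".
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			backoff = backoff * 3 / 2
		}
	}

	func main() {
		attempts := 0
		_ = retryUntil(5*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("all components running after", attempts, "attempts")
	}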
	I1206 09:50:02.238412  741534 system_pods.go:86] 8 kube-system pods found
	I1206 09:50:02.238481  741534 system_pods.go:89] "coredns-5dd5756b68-qvppb" [2a83cabb-a34e-496d-a9cf-f2017553b4d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:50:02.238498  741534 system_pods.go:89] "etcd-old-k8s-version-507108" [b98d28d9-0821-4c34-9323-c707d562f258] Running
	I1206 09:50:02.238514  741534 system_pods.go:89] "kindnet-pdc9w" [6a26f6a1-0a33-4271-983c-5c8b7e00efe3] Running
	I1206 09:50:02.238524  741534 system_pods.go:89] "kube-apiserver-old-k8s-version-507108" [280ccf0b-5c23-47ed-a558-ded6886ccd72] Running
	I1206 09:50:02.238530  741534 system_pods.go:89] "kube-controller-manager-old-k8s-version-507108" [392e4a3f-5f64-45c1-91cd-497784c58953] Running
	I1206 09:50:02.238538  741534 system_pods.go:89] "kube-proxy-q6xpd" [38af9a91-0e42-4afe-a310-7858c5e1b946] Running
	I1206 09:50:02.238543  741534 system_pods.go:89] "kube-scheduler-old-k8s-version-507108" [47942056-bd44-4a7f-b663-940859a025d6] Running
	I1206 09:50:02.238555  741534 system_pods.go:89] "storage-provisioner" [4cb0587e-58b5-46f1-80da-ca3de4441ae4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:50:02.238582  741534 retry.go:31] will retry after 246.480145ms: missing components: kube-dns
	I1206 09:50:02.490211  741534 system_pods.go:86] 8 kube-system pods found
	I1206 09:50:02.490247  741534 system_pods.go:89] "coredns-5dd5756b68-qvppb" [2a83cabb-a34e-496d-a9cf-f2017553b4d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:50:02.490254  741534 system_pods.go:89] "etcd-old-k8s-version-507108" [b98d28d9-0821-4c34-9323-c707d562f258] Running
	I1206 09:50:02.490260  741534 system_pods.go:89] "kindnet-pdc9w" [6a26f6a1-0a33-4271-983c-5c8b7e00efe3] Running
	I1206 09:50:02.490264  741534 system_pods.go:89] "kube-apiserver-old-k8s-version-507108" [280ccf0b-5c23-47ed-a558-ded6886ccd72] Running
	I1206 09:50:02.490268  741534 system_pods.go:89] "kube-controller-manager-old-k8s-version-507108" [392e4a3f-5f64-45c1-91cd-497784c58953] Running
	I1206 09:50:02.490280  741534 system_pods.go:89] "kube-proxy-q6xpd" [38af9a91-0e42-4afe-a310-7858c5e1b946] Running
	I1206 09:50:02.490285  741534 system_pods.go:89] "kube-scheduler-old-k8s-version-507108" [47942056-bd44-4a7f-b663-940859a025d6] Running
	I1206 09:50:02.490295  741534 system_pods.go:89] "storage-provisioner" [4cb0587e-58b5-46f1-80da-ca3de4441ae4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:50:02.490325  741534 retry.go:31] will retry after 444.937938ms: missing components: kube-dns
	I1206 09:50:02.940433  741534 system_pods.go:86] 8 kube-system pods found
	I1206 09:50:02.940482  741534 system_pods.go:89] "coredns-5dd5756b68-qvppb" [2a83cabb-a34e-496d-a9cf-f2017553b4d4] Running
	I1206 09:50:02.940488  741534 system_pods.go:89] "etcd-old-k8s-version-507108" [b98d28d9-0821-4c34-9323-c707d562f258] Running
	I1206 09:50:02.940491  741534 system_pods.go:89] "kindnet-pdc9w" [6a26f6a1-0a33-4271-983c-5c8b7e00efe3] Running
	I1206 09:50:02.940495  741534 system_pods.go:89] "kube-apiserver-old-k8s-version-507108" [280ccf0b-5c23-47ed-a558-ded6886ccd72] Running
	I1206 09:50:02.940501  741534 system_pods.go:89] "kube-controller-manager-old-k8s-version-507108" [392e4a3f-5f64-45c1-91cd-497784c58953] Running
	I1206 09:50:02.940504  741534 system_pods.go:89] "kube-proxy-q6xpd" [38af9a91-0e42-4afe-a310-7858c5e1b946] Running
	I1206 09:50:02.940508  741534 system_pods.go:89] "kube-scheduler-old-k8s-version-507108" [47942056-bd44-4a7f-b663-940859a025d6] Running
	I1206 09:50:02.940521  741534 system_pods.go:89] "storage-provisioner" [4cb0587e-58b5-46f1-80da-ca3de4441ae4] Running
	I1206 09:50:02.940532  741534 system_pods.go:126] duration metric: took 972.814167ms to wait for k8s-apps to be running ...
	I1206 09:50:02.940540  741534 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:50:02.940588  741534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:50:02.955990  741534 system_svc.go:56] duration metric: took 15.430486ms WaitForService to wait for kubelet
	I1206 09:50:02.956026  741534 kubeadm.go:587] duration metric: took 14.370924746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:50:02.956051  741534 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:50:02.959048  741534 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:50:02.959073  741534 node_conditions.go:123] node cpu capacity is 8
	I1206 09:50:02.959090  741534 node_conditions.go:105] duration metric: took 3.033492ms to run NodePressure ...
	I1206 09:50:02.959102  741534 start.go:242] waiting for startup goroutines ...
	I1206 09:50:02.959109  741534 start.go:247] waiting for cluster config update ...
	I1206 09:50:02.959119  741534 start.go:256] writing updated cluster config ...
	I1206 09:50:02.959406  741534 ssh_runner.go:195] Run: rm -f paused
	I1206 09:50:02.963512  741534 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:50:02.968158  741534 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-qvppb" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:02.972494  741534 pod_ready.go:94] pod "coredns-5dd5756b68-qvppb" is "Ready"
	I1206 09:50:02.972513  741534 pod_ready.go:86] duration metric: took 4.332335ms for pod "coredns-5dd5756b68-qvppb" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:02.975035  741534 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-507108" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:02.978879  741534 pod_ready.go:94] pod "etcd-old-k8s-version-507108" is "Ready"
	I1206 09:50:02.978898  741534 pod_ready.go:86] duration metric: took 3.843071ms for pod "etcd-old-k8s-version-507108" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:02.981544  741534 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-507108" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:02.985278  741534 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-507108" is "Ready"
	I1206 09:50:02.985301  741534 pod_ready.go:86] duration metric: took 3.737272ms for pod "kube-apiserver-old-k8s-version-507108" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:02.987751  741534 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-507108" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:03.368307  741534 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-507108" is "Ready"
	I1206 09:50:03.368334  741534 pod_ready.go:86] duration metric: took 380.563247ms for pod "kube-controller-manager-old-k8s-version-507108" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:03.568541  741534 pod_ready.go:83] waiting for pod "kube-proxy-q6xpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:03.969019  741534 pod_ready.go:94] pod "kube-proxy-q6xpd" is "Ready"
	I1206 09:50:03.969050  741534 pod_ready.go:86] duration metric: took 400.485352ms for pod "kube-proxy-q6xpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:04.168880  741534 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-507108" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:04.568062  741534 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-507108" is "Ready"
	I1206 09:50:04.568095  741534 pod_ready.go:86] duration metric: took 399.188111ms for pod "kube-scheduler-old-k8s-version-507108" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:50:04.568109  741534 pod_ready.go:40] duration metric: took 1.604564104s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
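The pod_ready.go waits above test one pod per control-plane label and pass once each pod reports "Ready". In Kubernetes terms that is the PodReady condition on the pod's status reaching True; a minimal typed sketch of that test, with isPodReady as a hypothetical helper rather than minikube's own function:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "coredns-5dd5756b68-qvppb", Namespace: "kube-system"},
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
			},
		}
		fmt.Printf("pod %q is Ready: %v\n", pod.Name, isPodReady(pod))
	}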
	I1206 09:50:04.613127  741534 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1206 09:50:04.614689  741534 out.go:203] 
	W1206 09:50:04.615804  741534 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1206 09:50:04.616879  741534 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1206 09:50:04.618155  741534 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-507108" cluster and "default" namespace by default
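The closing start.go:625 lines compute the minor-version skew between the local kubectl (1.34.2) and the cluster (1.28.0) and warn, since kubectl is only supported within one minor version of the server. A minimal sketch of that check (assumed shape, not minikube's exact code):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component from a "major.minor.patch" string.
	func minor(version string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unparseable version %q", version)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		kubectlVersion, clusterVersion := "1.34.2", "1.28.0"
		km, _ := minor(kubectlVersion)
		cm, _ := minor(clusterVersion)
		skew := km - cm
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
		if skew > 1 {
			fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n",
				kubectlVersion, clusterVersion)
		}
	}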
	I1206 09:50:00.677002  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:50:00.677039  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:50:03.217094  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:50:03.217580  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:50:03.217645  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:50:03.217717  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:50:03.253113  725997 cri.go:89] found id: "cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:50:03.253142  725997 cri.go:89] found id: ""
	I1206 09:50:03.253153  725997 logs.go:282] 1 containers: [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69]
	I1206 09:50:03.253216  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:03.257228  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:50:03.257302  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:50:03.291922  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:50:03.291945  725997 cri.go:89] found id: ""
	I1206 09:50:03.291955  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:50:03.292019  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:03.295964  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:50:03.296030  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:50:03.329928  725997 cri.go:89] found id: ""
	I1206 09:50:03.329961  725997 logs.go:282] 0 containers: []
	W1206 09:50:03.329972  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:50:03.329982  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:50:03.330057  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:50:03.364789  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:50:03.364818  725997 cri.go:89] found id: ""
	I1206 09:50:03.364828  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:50:03.364881  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:03.369182  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:50:03.369257  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:50:03.404651  725997 cri.go:89] found id: ""
	I1206 09:50:03.404674  725997 logs.go:282] 0 containers: []
	W1206 09:50:03.404683  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:50:03.404689  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:50:03.404739  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:50:03.438331  725997 cri.go:89] found id: "63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:50:03.438367  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:50:03.438373  725997 cri.go:89] found id: ""
	I1206 09:50:03.438386  725997 logs.go:282] 2 containers: [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03]
	I1206 09:50:03.438440  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:03.442272  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:03.445669  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:50:03.445718  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:50:03.479224  725997 cri.go:89] found id: ""
	I1206 09:50:03.479254  725997 logs.go:282] 0 containers: []
	W1206 09:50:03.479265  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:50:03.479275  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:50:03.479328  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:50:03.515023  725997 cri.go:89] found id: ""
	I1206 09:50:03.515047  725997 logs.go:282] 0 containers: []
	W1206 09:50:03.515055  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:50:03.515073  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:50:03.515085  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:50:03.598176  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:50:03.598212  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:50:03.618572  725997 logs.go:123] Gathering logs for kube-apiserver [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69] ...
	I1206 09:50:03.618600  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:50:03.656381  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:50:03.656409  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:50:03.695531  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:50:03.695562  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:50:03.755428  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:50:03.755448  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:50:03.755490  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:50:03.790492  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:50:03.790526  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:50:03.856829  725997 logs.go:123] Gathering logs for kube-controller-manager [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a] ...
	I1206 09:50:03.856864  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:50:03.892648  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:50:03.892676  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:50:03.927014  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:50:03.927043  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:50:01.776040  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:50:01.776067  714616 cri.go:89] found id: ""
	I1206 09:50:01.776078  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:50:01.776147  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:01.780517  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:50:01.780582  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:50:01.809882  714616 cri.go:89] found id: ""
	I1206 09:50:01.809911  714616 logs.go:282] 0 containers: []
	W1206 09:50:01.809920  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:50:01.809926  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:50:01.809990  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:50:01.838285  714616 cri.go:89] found id: ""
	I1206 09:50:01.838313  714616 logs.go:282] 0 containers: []
	W1206 09:50:01.838325  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:50:01.838339  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:50:01.838369  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:50:01.906229  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:50:01.906261  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:50:01.906278  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:50:01.938639  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:50:01.938671  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:50:01.970251  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:50:01.970276  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:50:01.999742  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:50:01.999772  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:50:02.053514  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:50:02.053552  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:50:02.086608  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:50:02.086641  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:50:02.168518  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:50:02.168555  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:50:04.690666  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:50:04.691035  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:50:04.691094  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:50:04.691156  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:50:04.723290  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:50:04.723318  714616 cri.go:89] found id: ""
	I1206 09:50:04.723328  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:50:04.723390  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:04.727773  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:50:04.727842  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:50:04.756223  714616 cri.go:89] found id: ""
	I1206 09:50:04.756256  714616 logs.go:282] 0 containers: []
	W1206 09:50:04.756266  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:50:04.756274  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:50:04.756336  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:50:04.790127  714616 cri.go:89] found id: ""
	I1206 09:50:04.790153  714616 logs.go:282] 0 containers: []
	W1206 09:50:04.790163  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:50:04.790171  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:50:04.790240  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:50:04.819082  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:50:04.819106  714616 cri.go:89] found id: ""
	I1206 09:50:04.819114  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:50:04.819167  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:04.823325  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:50:04.823389  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:50:04.850411  714616 cri.go:89] found id: ""
	I1206 09:50:04.850435  714616 logs.go:282] 0 containers: []
	W1206 09:50:04.850444  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:50:04.850449  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:50:04.850535  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:50:04.879050  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:50:04.879074  714616 cri.go:89] found id: ""
	I1206 09:50:04.879083  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:50:04.879141  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:04.883206  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:50:04.883262  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:50:04.911445  714616 cri.go:89] found id: ""
	I1206 09:50:04.911505  714616 logs.go:282] 0 containers: []
	W1206 09:50:04.911516  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:50:04.911524  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:50:04.911589  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:50:04.939295  714616 cri.go:89] found id: ""
	I1206 09:50:04.939318  714616 logs.go:282] 0 containers: []
	W1206 09:50:04.939331  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:50:04.939342  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:50:04.939358  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:50:05.022311  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:50:05.022347  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:50:05.042977  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:50:05.043015  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:50:05.102257  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:50:05.102280  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:50:05.102297  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:50:05.135135  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:50:05.135165  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:50:05.161267  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:50:05.161294  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:50:05.187381  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:50:05.187407  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:50:05.244055  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:50:05.244088  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
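The "container status" step above runs a shell one-liner that resolves crictl from PATH and falls back to docker ps -a if the crictl listing fails. A minimal Go sketch wrapping the same fallback; containerStatus is a hypothetical helper and the command runs locally rather than over the test's SSH session:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors the logged one-liner:
	// sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	func containerStatus() (string, error) {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("both crictl and docker listings failed:", err)
			return
		}
		fmt.Print(out)
	}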
	I1206 09:50:06.468017  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:50:06.468509  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:50:06.468576  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:50:06.468648  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:50:06.503826  725997 cri.go:89] found id: "cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:50:06.503847  725997 cri.go:89] found id: ""
	I1206 09:50:06.503855  725997 logs.go:282] 1 containers: [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69]
	I1206 09:50:06.503906  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:06.507819  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:50:06.507886  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:50:06.541297  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:50:06.541318  725997 cri.go:89] found id: ""
	I1206 09:50:06.541326  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:50:06.541381  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:06.545196  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:50:06.545252  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:50:06.579223  725997 cri.go:89] found id: ""
	I1206 09:50:06.579252  725997 logs.go:282] 0 containers: []
	W1206 09:50:06.579268  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:50:06.579275  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:50:06.579322  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:50:06.613842  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:50:06.613875  725997 cri.go:89] found id: ""
	I1206 09:50:06.613886  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:50:06.613946  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:06.617782  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:50:06.617841  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:50:06.651976  725997 cri.go:89] found id: ""
	I1206 09:50:06.652000  725997 logs.go:282] 0 containers: []
	W1206 09:50:06.652008  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:50:06.652014  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:50:06.652062  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:50:06.686998  725997 cri.go:89] found id: "63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:50:06.687023  725997 cri.go:89] found id: "27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:50:06.687028  725997 cri.go:89] found id: ""
	I1206 09:50:06.687035  725997 logs.go:282] 2 containers: [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03]
	I1206 09:50:06.687100  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:06.691187  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:06.694903  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:50:06.694964  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:50:06.728332  725997 cri.go:89] found id: ""
	I1206 09:50:06.728354  725997 logs.go:282] 0 containers: []
	W1206 09:50:06.728361  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:50:06.728367  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:50:06.728416  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:50:06.762695  725997 cri.go:89] found id: ""
	I1206 09:50:06.762722  725997 logs.go:282] 0 containers: []
	W1206 09:50:06.762730  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:50:06.762750  725997 logs.go:123] Gathering logs for kube-controller-manager [27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03] ...
	I1206 09:50:06.762767  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3f0f8e0bc0460eca0bda3184d6d3434e5673392b7d1c4b10c87b0d8927a03"
	I1206 09:50:06.798303  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:50:06.798330  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:50:06.836620  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:50:06.836690  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:50:06.920312  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:50:06.920350  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:50:06.981531  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:50:06.981554  725997 logs.go:123] Gathering logs for kube-apiserver [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69] ...
	I1206 09:50:06.981567  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:50:07.019799  725997 logs.go:123] Gathering logs for kube-controller-manager [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a] ...
	I1206 09:50:07.019826  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:50:07.054567  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:50:07.054593  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:50:07.092744  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:50:07.092772  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:50:07.113745  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:50:07.113771  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:50:07.148629  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:50:07.148661  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:50:09.715537  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:50:09.716032  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:50:09.716096  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:50:09.716152  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:50:09.751133  725997 cri.go:89] found id: "cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:50:09.751160  725997 cri.go:89] found id: ""
	I1206 09:50:09.751170  725997 logs.go:282] 1 containers: [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69]
	I1206 09:50:09.751220  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:09.755430  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:50:09.755516  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:50:09.791025  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:50:09.791048  725997 cri.go:89] found id: ""
	I1206 09:50:09.791058  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:50:09.791119  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:09.794835  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:50:09.794895  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:50:09.828819  725997 cri.go:89] found id: ""
	I1206 09:50:09.828842  725997 logs.go:282] 0 containers: []
	W1206 09:50:09.828853  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:50:09.828861  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:50:09.828929  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:50:09.862847  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:50:09.862868  725997 cri.go:89] found id: ""
	I1206 09:50:09.862876  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:50:09.862927  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:09.866629  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:50:09.866699  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:50:09.901895  725997 cri.go:89] found id: ""
	I1206 09:50:09.901916  725997 logs.go:282] 0 containers: []
	W1206 09:50:09.901924  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:50:09.901930  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:50:09.901981  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:50:09.935535  725997 cri.go:89] found id: "63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:50:09.935559  725997 cri.go:89] found id: ""
	I1206 09:50:09.935569  725997 logs.go:282] 1 containers: [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a]
	I1206 09:50:09.935636  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:50:09.939356  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:50:09.939412  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:50:09.973931  725997 cri.go:89] found id: ""
	I1206 09:50:09.973953  725997 logs.go:282] 0 containers: []
	W1206 09:50:09.973961  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:50:09.973967  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:50:09.974024  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:50:10.008672  725997 cri.go:89] found id: ""
	I1206 09:50:10.008700  725997 logs.go:282] 0 containers: []
	W1206 09:50:10.008711  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:50:10.008729  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:50:10.008745  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:50:10.045937  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:50:10.045969  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:50:10.131049  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:50:10.131085  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:50:10.189990  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:50:10.190026  725997 logs.go:123] Gathering logs for kube-apiserver [cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69] ...
	I1206 09:50:10.190044  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf69b42e8c91821cf4447f7bc93a291a1cd681928dc9e855c8bebad57e8f2c69"
	I1206 09:50:10.227008  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:50:10.227045  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:50:10.294091  725997 logs.go:123] Gathering logs for kube-controller-manager [63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a] ...
	I1206 09:50:10.294131  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ffb2db91df9e4e92d4b2cc2fb8e21b8e64a2807a985216bb0c4e47400c1d7a"
	I1206 09:50:10.330712  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:50:10.330740  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:50:10.369167  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:50:10.369204  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:50:10.389559  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:50:10.389593  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:50:07.776169  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:50:07.776722  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:50:07.776793  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:50:07.776857  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:50:07.805130  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:50:07.805156  714616 cri.go:89] found id: ""
	I1206 09:50:07.805166  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:50:07.805215  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:07.809359  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:50:07.809427  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:50:07.836285  714616 cri.go:89] found id: ""
	I1206 09:50:07.836308  714616 logs.go:282] 0 containers: []
	W1206 09:50:07.836317  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:50:07.836324  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:50:07.836379  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:50:07.862743  714616 cri.go:89] found id: ""
	I1206 09:50:07.862765  714616 logs.go:282] 0 containers: []
	W1206 09:50:07.862773  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:50:07.862778  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:50:07.862834  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:50:07.890044  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:50:07.890063  714616 cri.go:89] found id: ""
	I1206 09:50:07.890072  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:50:07.890119  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:07.893999  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:50:07.894065  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:50:07.920944  714616 cri.go:89] found id: ""
	I1206 09:50:07.920967  714616 logs.go:282] 0 containers: []
	W1206 09:50:07.920975  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:50:07.920983  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:50:07.921040  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:50:07.948120  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:50:07.948139  714616 cri.go:89] found id: ""
	I1206 09:50:07.948147  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:50:07.948203  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:07.952144  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:50:07.952205  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:50:07.980149  714616 cri.go:89] found id: ""
	I1206 09:50:07.980172  714616 logs.go:282] 0 containers: []
	W1206 09:50:07.980180  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:50:07.980186  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:50:07.980245  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:50:08.009516  714616 cri.go:89] found id: ""
	I1206 09:50:08.009542  714616 logs.go:282] 0 containers: []
	W1206 09:50:08.009550  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:50:08.009559  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:50:08.009571  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:50:08.067317  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:50:08.067337  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:50:08.067355  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:50:08.097251  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:50:08.097281  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:50:08.125817  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:50:08.125843  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:50:08.152815  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:50:08.152844  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:50:08.209780  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:50:08.209814  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:50:08.241572  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:50:08.241604  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:50:08.328506  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:50:08.328540  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:50:10.847985  714616 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:50:10.848469  714616 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:50:10.848540  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:50:10.848602  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:50:10.878654  714616 cri.go:89] found id: "97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
	I1206 09:50:10.878680  714616 cri.go:89] found id: ""
	I1206 09:50:10.878689  714616 logs.go:282] 1 containers: [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a]
	I1206 09:50:10.878747  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:10.882831  714616 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:50:10.882899  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:50:10.909635  714616 cri.go:89] found id: ""
	I1206 09:50:10.909662  714616 logs.go:282] 0 containers: []
	W1206 09:50:10.909670  714616 logs.go:284] No container was found matching "etcd"
	I1206 09:50:10.909676  714616 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:50:10.909728  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:50:10.938005  714616 cri.go:89] found id: ""
	I1206 09:50:10.938029  714616 logs.go:282] 0 containers: []
	W1206 09:50:10.938037  714616 logs.go:284] No container was found matching "coredns"
	I1206 09:50:10.938050  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:50:10.938105  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:50:10.966340  714616 cri.go:89] found id: "a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:50:10.966366  714616 cri.go:89] found id: ""
	I1206 09:50:10.966375  714616 logs.go:282] 1 containers: [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb]
	I1206 09:50:10.966422  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:10.970465  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:50:10.970534  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:50:10.997774  714616 cri.go:89] found id: ""
	I1206 09:50:10.997798  714616 logs.go:282] 0 containers: []
	W1206 09:50:10.997806  714616 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:50:10.997812  714616 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:50:10.997859  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:50:11.026048  714616 cri.go:89] found id: "59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:50:11.026069  714616 cri.go:89] found id: ""
	I1206 09:50:11.026077  714616 logs.go:282] 1 containers: [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1]
	I1206 09:50:11.026125  714616 ssh_runner.go:195] Run: which crictl
	I1206 09:50:11.030257  714616 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:50:11.030325  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:50:11.058709  714616 cri.go:89] found id: ""
	I1206 09:50:11.058735  714616 logs.go:282] 0 containers: []
	W1206 09:50:11.058743  714616 logs.go:284] No container was found matching "kindnet"
	I1206 09:50:11.058751  714616 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:50:11.058810  714616 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:50:11.087373  714616 cri.go:89] found id: ""
	I1206 09:50:11.087400  714616 logs.go:282] 0 containers: []
	W1206 09:50:11.087408  714616 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:50:11.087419  714616 logs.go:123] Gathering logs for kube-scheduler [a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb] ...
	I1206 09:50:11.087432  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a41f820d2cb0c3dc1d6060ffbe176467d7eba45ea7d2923555c198e4a9deeddb"
	I1206 09:50:11.114932  714616 logs.go:123] Gathering logs for kube-controller-manager [59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1] ...
	I1206 09:50:11.114960  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59ce079d50270f512594a837d855c6cce6bfe5d933f338f9039517c718e82da1"
	I1206 09:50:11.143403  714616 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:50:11.143432  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:50:11.197084  714616 logs.go:123] Gathering logs for container status ...
	I1206 09:50:11.197123  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:50:11.229836  714616 logs.go:123] Gathering logs for kubelet ...
	I1206 09:50:11.229866  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:50:11.316704  714616 logs.go:123] Gathering logs for dmesg ...
	I1206 09:50:11.316755  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:50:11.336956  714616 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:50:11.336990  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:50:11.396129  714616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:50:11.396159  714616 logs.go:123] Gathering logs for kube-apiserver [97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a] ...
	I1206 09:50:11.396175  714616 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a"
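	
	Everything above is minikube's automated log sweep; the same diagnostics can be reproduced by hand on the node (a minimal sketch, assuming crictl is on PATH and reusing IDs and paths taken verbatim from the log above):
	
	  # the health probe the runner keeps retrying
	  curl -sk https://192.168.103.2:8443/healthz
	  # enumerate all kube-apiserver containers, running or exited
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # tail the last 400 log lines of one container (ID from the command above)
	  sudo crictl logs --tail 400 97fb1fc2be35cb68151c5555ff1af3265e9f26e509512e7bc1158d4597ae096a
	  # unit logs for the runtime and the kubelet
	  sudo journalctl -u crio -n 400
	  sudo journalctl -u kubelet -n 400
	  # describe nodes via the kubeadm-managed kubeconfig; this keeps failing with
	  # "connection refused" for as long as nothing serves on localhost:8443
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig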
	
	
	==> CRI-O <==
	Dec 06 09:50:02 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:02.232991492Z" level=info msg="Started container" PID=2192 containerID=ea250ca70217a852dff7612d02c4e0c4e67a79f73a33afaa755d77a8885a4f62 description=kube-system/coredns-5dd5756b68-qvppb/coredns id=e9db23a2-5e9a-43c4-8b6b-8385eaccf96a name=/runtime.v1.RuntimeService/StartContainer sandboxID=453712a08aab289e162761119191074e147eca3be849c0d5d1ae422a4872cf0f
	Dec 06 09:50:02 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:02.234235531Z" level=info msg="Started container" PID=2191 containerID=e057e14b4dbe28c1e93c5aa860eedeade4c5b35db58889c9fe7efeda52fe83c0 description=kube-system/storage-provisioner/storage-provisioner id=b39ea4fd-e328-4ea7-af6a-ebdab6807dc7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3375a221589f2af2d651c3b8e4a400a1390e4ad770abb37c6260baffa6b3685a
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.079193686Z" level=info msg="Running pod sandbox: default/busybox/POD" id=22002e07-2506-43cc-8d51-cf4918f5ff66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.079270018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.083878686Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0ac83a036cec3155cd2b2018649f2cd4f5b33f30a6d52acf6b870d1e6743a5b0 UID:112fc604-7ed8-485f-a853-ab890836965e NetNS:/var/run/netns/2451d9ea-98e4-4352-abdc-13fbb3ae3a70 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00059a710}] Aliases:map[]}"
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.083907079Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.093879457Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0ac83a036cec3155cd2b2018649f2cd4f5b33f30a6d52acf6b870d1e6743a5b0 UID:112fc604-7ed8-485f-a853-ab890836965e NetNS:/var/run/netns/2451d9ea-98e4-4352-abdc-13fbb3ae3a70 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00059a710}] Aliases:map[]}"
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.094056021Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.094945784Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.095926697Z" level=info msg="Ran pod sandbox 0ac83a036cec3155cd2b2018649f2cd4f5b33f30a6d52acf6b870d1e6743a5b0 with infra container: default/busybox/POD" id=22002e07-2506-43cc-8d51-cf4918f5ff66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.097507243Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f78ec835-4e83-4725-8174-54168182b5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.097627474Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f78ec835-4e83-4725-8174-54168182b5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.097662809Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f78ec835-4e83-4725-8174-54168182b5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.098275347Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c399e6b3-7831-4341-ac14-34f82eb77b24 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:50:05 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:05.102145348Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.54312062Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c399e6b3-7831-4341-ac14-34f82eb77b24 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.544011273Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=561126e7-7dc4-4392-81fe-80199a06edc5 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.545486793Z" level=info msg="Creating container: default/busybox/busybox" id=b0d71967-c0dc-4fbb-9449-2ac018d90da4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.545590415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.549108742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.549528372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.579577947Z" level=info msg="Created container 0420ffe9c313c8ffcf354f7e331f378d11aebc93a1d4f2968866e5c19d3793fa: default/busybox/busybox" id=b0d71967-c0dc-4fbb-9449-2ac018d90da4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.580150207Z" level=info msg="Starting container: 0420ffe9c313c8ffcf354f7e331f378d11aebc93a1d4f2968866e5c19d3793fa" id=f93ed56d-31b9-4ba2-b43c-5a76357cd689 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:50:07 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:07.581888659Z" level=info msg="Started container" PID=2270 containerID=0420ffe9c313c8ffcf354f7e331f378d11aebc93a1d4f2968866e5c19d3793fa description=default/busybox/busybox id=f93ed56d-31b9-4ba2-b43c-5a76357cd689 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ac83a036cec3155cd2b2018649f2cd4f5b33f30a6d52acf6b870d1e6743a5b0
	Dec 06 09:50:13 old-k8s-version-507108 crio[779]: time="2025-12-06T09:50:13.855075353Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
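	
	The pull sequence above (ImageStatus miss, PullImage, then CreateContainer/StartContainer) can be spot-checked with crictl against the same CRI-O socket; a sketch:
	
	  # is the image (still) in CRI-O's store?
	  sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc
	  # pull it explicitly and confirm it is cached
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	  sudo crictl images | grep busybox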
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	0420ffe9c313c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   0ac83a036cec3       busybox                                          default
	ea250ca70217a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   453712a08aab2       coredns-5dd5756b68-qvppb                         kube-system
	e057e14b4dbe2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   3375a221589f2       storage-provisioner                              kube-system
	31d722f85c21e       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   5dce74b54af3d       kindnet-pdc9w                                    kube-system
	a03cdb018c0e8       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   bed6bbc093e38       kube-proxy-q6xpd                                 kube-system
	ab6df6a319960       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   594648a4a0cc3       etcd-old-k8s-version-507108                      kube-system
	da6203088b163       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   4a179f5d2516e       kube-scheduler-old-k8s-version-507108            kube-system
	f61ea0f11e4ec       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   05f7b23f15be7       kube-controller-manager-old-k8s-version-507108   kube-system
	1e96820fc131b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   dc23b558a2b22       kube-apiserver-old-k8s-version-507108            kube-system
	
	
	==> coredns [ea250ca70217a852dff7612d02c4e0c4e67a79f73a33afaa755d77a8885a4f62] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47983 - 17629 "HINFO IN 8708432046434539397.4071886491148180761. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034028161s
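	
	The random-label HINFO query is CoreDNS's loop-detection self-test; the NXDOMAIN answer means no forwarding loop was found. In-cluster resolution can be spot-checked with a throwaway pod once the apiserver is reachable (a sketch, reusing the busybox image already pulled above):
	
	  kubectl run dnsprobe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default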
	
	
	==> describe nodes <==
	Name:               old-k8s-version-507108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-507108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=old-k8s-version-507108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_49_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:49:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-507108
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:50:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:50:06 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:50:06 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:50:06 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:50:06 +0000   Sat, 06 Dec 2025 09:50:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-507108
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                9c098f98-750a-4e92-a2e1-303c4ddd2d10
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-qvppb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-507108                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-pdc9w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-507108             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-507108    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-q6xpd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-507108             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-507108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-507108 event: Registered Node old-k8s-version-507108 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-507108 status is now: NodeReady
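	
	The Allocated resources totals above follow directly from the pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m (kindnet's 100m is the only CPU limit), and memory requests 70Mi + 100Mi + 50Mi = 220Mi against limits of 170Mi + 50Mi = 220Mi; busybox, kube-proxy, and storage-provisioner request nothing.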
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [ab6df6a319960859d0db3b467e9fb68d5206f68c2d57aee7f2169c737feafe14] <==
	{"level":"info","ts":"2025-12-06T09:49:31.169739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-06T09:49:31.16992Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-06T09:49:31.173145Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-06T09:49:31.173405Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-06T09:49:31.173428Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-06T09:49:31.173439Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-06T09:49:31.173479Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-06T09:49:31.758947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-06T09:49:31.758991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-06T09:49:31.759006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-06T09:49:31.759018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-06T09:49:31.759023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-06T09:49:31.759031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-06T09:49:31.759038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-06T09:49:31.759695Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:49:31.760319Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-507108 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-06T09:49:31.760318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:49:31.760345Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:49:31.760442Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:49:31.760588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:49:31.760588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-06T09:49:31.760631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-06T09:49:31.76063Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:49:31.761612Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-06T09:49:31.761635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:50:15 up  2:32,  0 user,  load average: 2.28, 2.32, 3.16
	Linux old-k8s-version-507108 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [31d722f85c21ee24dbc27f7b3a34cf88110c3450c93c0a0d5741c3c3568b41e4] <==
	I1206 09:49:51.446104       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:49:51.446489       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:49:51.446609       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:49:51.446623       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:49:51.446643       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:49:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:49:51.738552       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:49:51.738595       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:49:51.738608       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:49:51.738792       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:49:51.973928       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:49:51.973963       1 metrics.go:72] Registering metrics
	I1206 09:49:51.974034       1 controller.go:711] "Syncing nftables rules"
	I1206 09:50:01.648277       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:50:01.648344       1 main.go:301] handling current node
	I1206 09:50:11.647162       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:50:11.647191       1 main.go:301] handling current node
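	
	The one error above, "nri plugin exited: failed to connect to NRI service", just means CRI-O is not exposing an NRI socket; kindnet's network-policy controller keeps running on informers alone, as the subsequent "Caches are synced" and "Syncing nftables rules" lines show. Whether the socket exists can be checked directly on the node:
	
	  ls -l /var/run/nri/nri.sock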
	
	
	==> kube-apiserver [1e96820fc131b0eeefffd6d9e932adc16d2bb55bac2a326c3cedc6f35ccb7577] <==
	I1206 09:49:32.774380       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1206 09:49:32.774415       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:49:32.774435       1 aggregator.go:166] initial CRD sync complete...
	I1206 09:49:32.774442       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 09:49:32.774449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:49:32.774489       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:49:32.776314       1 controller.go:624] quota admission added evaluator for: namespaces
	E1206 09:49:32.780526       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1206 09:49:32.816412       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1206 09:49:32.983755       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:49:33.678507       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:49:33.682814       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:49:33.682839       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:49:34.094981       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:49:34.133313       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:49:34.184752       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:49:34.192497       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1206 09:49:34.193650       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 09:49:34.198089       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:49:34.715595       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 09:49:35.534916       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 09:49:35.543750       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:49:35.554083       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1206 09:49:48.222070       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1206 09:49:48.471328       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f61ea0f11e4ecc89fbedb57643c5648113b49065c64ccc88923ed53a489b8ec9] <==
	I1206 09:49:47.744133       1 shared_informer.go:318] Caches are synced for stateful set
	I1206 09:49:47.772696       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:49:48.085614       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:49:48.129564       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:49:48.129592       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1206 09:49:48.225555       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1206 09:49:48.478938       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pdc9w"
	I1206 09:49:48.480105       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q6xpd"
	I1206 09:49:48.575781       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pbn84"
	I1206 09:49:48.580944       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qvppb"
	I1206 09:49:48.586536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="361.060325ms"
	I1206 09:49:48.594869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.279618ms"
	I1206 09:49:48.608829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.898778ms"
	I1206 09:49:48.608949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.38µs"
	I1206 09:49:48.962989       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1206 09:49:48.970358       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pbn84"
	I1206 09:49:48.976816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.897482ms"
	I1206 09:49:48.982619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.752678ms"
	I1206 09:49:48.982762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.745µs"
	I1206 09:50:01.882594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="146.674µs"
	I1206 09:50:01.902086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.532µs"
	I1206 09:50:02.666081       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1206 09:50:02.699696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.745µs"
	I1206 09:50:02.726338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.053636ms"
	I1206 09:50:02.726533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="150.098µs"
	
	
	==> kube-proxy [a03cdb018c0e8fbb87431743634411743ae625d137fc49366393fae971ffdbbf] <==
	I1206 09:49:48.900203       1 server_others.go:69] "Using iptables proxy"
	I1206 09:49:48.911076       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1206 09:49:48.938278       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:49:48.941243       1 server_others.go:152] "Using iptables Proxier"
	I1206 09:49:48.941331       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 09:49:48.941361       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 09:49:48.941396       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 09:49:48.942122       1 server.go:846] "Version info" version="v1.28.0"
	I1206 09:49:48.942212       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:49:48.943176       1 config.go:188] "Starting service config controller"
	I1206 09:49:48.943220       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 09:49:48.943524       1 config.go:315] "Starting node config controller"
	I1206 09:49:48.943687       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 09:49:48.944243       1 config.go:97] "Starting endpoint slice config controller"
	I1206 09:49:48.944279       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 09:49:49.043653       1 shared_informer.go:318] Caches are synced for service config
	I1206 09:49:49.044805       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 09:49:49.044818       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [da6203088b163ee41565018130093554957f0ac9c8177f176bfc85b95e21df38] <==
	E1206 09:49:32.736277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 09:49:32.736249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 09:49:32.736361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 09:49:32.736431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1206 09:49:32.736490       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 09:49:32.736512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 09:49:33.573746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 09:49:33.573776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 09:49:33.575253       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 09:49:33.575277       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:49:33.616063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 09:49:33.616107       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 09:49:33.634676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 09:49:33.634706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 09:49:33.694418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 09:49:33.694452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 09:49:33.703939       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 09:49:33.703976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1206 09:49:33.862359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 09:49:33.862397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1206 09:49:33.869755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 09:49:33.869791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1206 09:49:33.941607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 09:49:33.941647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1206 09:49:35.333360       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 06 09:49:47 old-k8s-version-507108 kubelet[1420]: I1206 09:49:47.556442    1420 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.484214    1420 topology_manager.go:215] "Topology Admit Handler" podUID="6a26f6a1-0a33-4271-983c-5c8b7e00efe3" podNamespace="kube-system" podName="kindnet-pdc9w"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.485144    1420 topology_manager.go:215] "Topology Admit Handler" podUID="38af9a91-0e42-4afe-a310-7858c5e1b946" podNamespace="kube-system" podName="kube-proxy-q6xpd"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.583869    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a26f6a1-0a33-4271-983c-5c8b7e00efe3-xtables-lock\") pod \"kindnet-pdc9w\" (UID: \"6a26f6a1-0a33-4271-983c-5c8b7e00efe3\") " pod="kube-system/kindnet-pdc9w"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.583918    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a26f6a1-0a33-4271-983c-5c8b7e00efe3-lib-modules\") pod \"kindnet-pdc9w\" (UID: \"6a26f6a1-0a33-4271-983c-5c8b7e00efe3\") " pod="kube-system/kindnet-pdc9w"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.583951    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26lvv\" (UniqueName: \"kubernetes.io/projected/6a26f6a1-0a33-4271-983c-5c8b7e00efe3-kube-api-access-26lvv\") pod \"kindnet-pdc9w\" (UID: \"6a26f6a1-0a33-4271-983c-5c8b7e00efe3\") " pod="kube-system/kindnet-pdc9w"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.583978    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38af9a91-0e42-4afe-a310-7858c5e1b946-xtables-lock\") pod \"kube-proxy-q6xpd\" (UID: \"38af9a91-0e42-4afe-a310-7858c5e1b946\") " pod="kube-system/kube-proxy-q6xpd"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.584008    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hz8\" (UniqueName: \"kubernetes.io/projected/38af9a91-0e42-4afe-a310-7858c5e1b946-kube-api-access-54hz8\") pod \"kube-proxy-q6xpd\" (UID: \"38af9a91-0e42-4afe-a310-7858c5e1b946\") " pod="kube-system/kube-proxy-q6xpd"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.584037    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a26f6a1-0a33-4271-983c-5c8b7e00efe3-cni-cfg\") pod \"kindnet-pdc9w\" (UID: \"6a26f6a1-0a33-4271-983c-5c8b7e00efe3\") " pod="kube-system/kindnet-pdc9w"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.584142    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38af9a91-0e42-4afe-a310-7858c5e1b946-kube-proxy\") pod \"kube-proxy-q6xpd\" (UID: \"38af9a91-0e42-4afe-a310-7858c5e1b946\") " pod="kube-system/kube-proxy-q6xpd"
	Dec 06 09:49:48 old-k8s-version-507108 kubelet[1420]: I1206 09:49:48.584268    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38af9a91-0e42-4afe-a310-7858c5e1b946-lib-modules\") pod \"kube-proxy-q6xpd\" (UID: \"38af9a91-0e42-4afe-a310-7858c5e1b946\") " pod="kube-system/kube-proxy-q6xpd"
	Dec 06 09:49:49 old-k8s-version-507108 kubelet[1420]: I1206 09:49:49.659417    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q6xpd" podStartSLOduration=1.659362475 podCreationTimestamp="2025-12-06 09:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:49:49.659077672 +0000 UTC m=+14.151447635" watchObservedRunningTime="2025-12-06 09:49:49.659362475 +0000 UTC m=+14.151732441"
	Dec 06 09:49:51 old-k8s-version-507108 kubelet[1420]: I1206 09:49:51.662975    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-pdc9w" podStartSLOduration=1.2517808590000001 podCreationTimestamp="2025-12-06 09:49:48 +0000 UTC" firstStartedPulling="2025-12-06 09:49:48.794996153 +0000 UTC m=+13.287366103" lastFinishedPulling="2025-12-06 09:49:51.206145096 +0000 UTC m=+15.698515051" observedRunningTime="2025-12-06 09:49:51.662896073 +0000 UTC m=+16.155266036" watchObservedRunningTime="2025-12-06 09:49:51.662929807 +0000 UTC m=+16.155299773"
	Dec 06 09:50:01 old-k8s-version-507108 kubelet[1420]: I1206 09:50:01.858430    1420 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 06 09:50:01 old-k8s-version-507108 kubelet[1420]: I1206 09:50:01.881045    1420 topology_manager.go:215] "Topology Admit Handler" podUID="4cb0587e-58b5-46f1-80da-ca3de4441ae4" podNamespace="kube-system" podName="storage-provisioner"
	Dec 06 09:50:01 old-k8s-version-507108 kubelet[1420]: I1206 09:50:01.882607    1420 topology_manager.go:215] "Topology Admit Handler" podUID="2a83cabb-a34e-496d-a9cf-f2017553b4d4" podNamespace="kube-system" podName="coredns-5dd5756b68-qvppb"
	Dec 06 09:50:01 old-k8s-version-507108 kubelet[1420]: I1206 09:50:01.979273    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a83cabb-a34e-496d-a9cf-f2017553b4d4-config-volume\") pod \"coredns-5dd5756b68-qvppb\" (UID: \"2a83cabb-a34e-496d-a9cf-f2017553b4d4\") " pod="kube-system/coredns-5dd5756b68-qvppb"
	Dec 06 09:50:01 old-k8s-version-507108 kubelet[1420]: I1206 09:50:01.979336    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4cb0587e-58b5-46f1-80da-ca3de4441ae4-tmp\") pod \"storage-provisioner\" (UID: \"4cb0587e-58b5-46f1-80da-ca3de4441ae4\") " pod="kube-system/storage-provisioner"
	Dec 06 09:50:01 old-k8s-version-507108 kubelet[1420]: I1206 09:50:01.979367    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ktw\" (UniqueName: \"kubernetes.io/projected/4cb0587e-58b5-46f1-80da-ca3de4441ae4-kube-api-access-c2ktw\") pod \"storage-provisioner\" (UID: \"4cb0587e-58b5-46f1-80da-ca3de4441ae4\") " pod="kube-system/storage-provisioner"
	Dec 06 09:50:01 old-k8s-version-507108 kubelet[1420]: I1206 09:50:01.979617    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xftds\" (UniqueName: \"kubernetes.io/projected/2a83cabb-a34e-496d-a9cf-f2017553b4d4-kube-api-access-xftds\") pod \"coredns-5dd5756b68-qvppb\" (UID: \"2a83cabb-a34e-496d-a9cf-f2017553b4d4\") " pod="kube-system/coredns-5dd5756b68-qvppb"
	Dec 06 09:50:02 old-k8s-version-507108 kubelet[1420]: I1206 09:50:02.710765    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.71070978 podCreationTimestamp="2025-12-06 09:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:50:02.710584823 +0000 UTC m=+27.202954786" watchObservedRunningTime="2025-12-06 09:50:02.71070978 +0000 UTC m=+27.203079742"
	Dec 06 09:50:02 old-k8s-version-507108 kubelet[1420]: I1206 09:50:02.710873    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qvppb" podStartSLOduration=14.710845917 podCreationTimestamp="2025-12-06 09:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:50:02.699580395 +0000 UTC m=+27.191950358" watchObservedRunningTime="2025-12-06 09:50:02.710845917 +0000 UTC m=+27.203215882"
	Dec 06 09:50:04 old-k8s-version-507108 kubelet[1420]: I1206 09:50:04.777145    1420 topology_manager.go:215] "Topology Admit Handler" podUID="112fc604-7ed8-485f-a853-ab890836965e" podNamespace="default" podName="busybox"
	Dec 06 09:50:04 old-k8s-version-507108 kubelet[1420]: I1206 09:50:04.795875    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4w78\" (UniqueName: \"kubernetes.io/projected/112fc604-7ed8-485f-a853-ab890836965e-kube-api-access-p4w78\") pod \"busybox\" (UID: \"112fc604-7ed8-485f-a853-ab890836965e\") " pod="default/busybox"
	Dec 06 09:50:07 old-k8s-version-507108 kubelet[1420]: I1206 09:50:07.698607    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.252937856 podCreationTimestamp="2025-12-06 09:50:04 +0000 UTC" firstStartedPulling="2025-12-06 09:50:05.097861178 +0000 UTC m=+29.590231133" lastFinishedPulling="2025-12-06 09:50:07.543485863 +0000 UTC m=+32.035855813" observedRunningTime="2025-12-06 09:50:07.698211806 +0000 UTC m=+32.190581769" watchObservedRunningTime="2025-12-06 09:50:07.698562536 +0000 UTC m=+32.190932498"
	
	
	==> storage-provisioner [e057e14b4dbe28c1e93c5aa860eedeade4c5b35db58889c9fe7efeda52fe83c0] <==
	I1206 09:50:02.248394       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:50:02.258744       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:50:02.258801       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 09:50:02.265938       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:50:02.266087       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ae82bcf-d8bd-405a-ab47-eb637aa10d2b", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-507108_f105427b-ad78-4b99-b186-f2714fc6a677 became leader
	I1206 09:50:02.266150       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-507108_f105427b-ad78-4b99-b186-f2714fc6a677!
	I1206 09:50:02.366773       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-507108_f105427b-ad78-4b99-b186-f2714fc6a677!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-507108 -n old-k8s-version-507108
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-507108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.32s)
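The kube-scheduler reflector warnings near the top of the post-mortem log above ("persistentvolumes is forbidden" and similar) are the usual control-plane startup race: the scheduler begins listing resources before the bootstrap RBAC bindings are visible, and the closing "Caches are synced" line shows the informers recovered. A quick sanity check against a live cluster (context name taken from this report) would be:

	kubectl --context old-k8s-version-507108 auth can-i list persistentvolumes --as=system:kube-scheduler
	kubectl --context old-k8s-version-507108 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler

Both should print "yes" once RBAC has settled, so the warnings alone do not explain the addon failure.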

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-507108 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-507108 --alsologtostderr -v=1: exit status 80 (2.06460306s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-507108 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:51:35.828009  766627 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:51:35.828244  766627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:51:35.828259  766627 out.go:374] Setting ErrFile to fd 2...
	I1206 09:51:35.828265  766627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:51:35.828706  766627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:51:35.829007  766627 out.go:368] Setting JSON to false
	I1206 09:51:35.829034  766627 mustload.go:66] Loading cluster: old-k8s-version-507108
	I1206 09:51:35.829552  766627 config.go:182] Loaded profile config "old-k8s-version-507108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:51:35.830104  766627 cli_runner.go:164] Run: docker container inspect old-k8s-version-507108 --format={{.State.Status}}
	I1206 09:51:35.851525  766627 host.go:66] Checking if "old-k8s-version-507108" exists ...
	I1206 09:51:35.851982  766627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:51:35.929782  766627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:93 OomKillDisable:false NGoroutines:101 SystemTime:2025-12-06 09:51:35.917719985 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:51:35.930734  766627 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-507108 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:51:35.932482  766627 out.go:179] * Pausing node old-k8s-version-507108 ... 
	I1206 09:51:35.933573  766627 host.go:66] Checking if "old-k8s-version-507108" exists ...
	I1206 09:51:35.933923  766627 ssh_runner.go:195] Run: systemctl --version
	I1206 09:51:35.933991  766627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-507108
	I1206 09:51:35.955934  766627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/old-k8s-version-507108/id_rsa Username:docker}
	I1206 09:51:36.052083  766627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:51:36.092202  766627 pause.go:52] kubelet running: true
	I1206 09:51:36.092334  766627 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:51:36.311632  766627 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:51:36.311728  766627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:51:36.390256  766627 cri.go:89] found id: "42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2"
	I1206 09:51:36.390281  766627 cri.go:89] found id: "e7cdddacd3684d67407466e547704422e7b3d83504f8602a5eed09903630559d"
	I1206 09:51:36.390287  766627 cri.go:89] found id: "75357b2b778c96496d8c4298aeb32324c9a83e9955f0c8b8385c30a0381501f1"
	I1206 09:51:36.390291  766627 cri.go:89] found id: "3eee5f1d4d11750ebd0190ccbd3ec6afbf1f4658b007f40bf277f2f27891ed47"
	I1206 09:51:36.390296  766627 cri.go:89] found id: "0b68753786d00a4bb60e47f31e486200fabaaeb743cbffa572339af4be74a216"
	I1206 09:51:36.390301  766627 cri.go:89] found id: "c63bd209bc99c4f282c51f22f469fa540401634190be26711d150482c1f373d7"
	I1206 09:51:36.390305  766627 cri.go:89] found id: "c1a010de0842e99d0ec887649d5d92d06a2a5d0238c5a7ead0bc168b3d098af0"
	I1206 09:51:36.390310  766627 cri.go:89] found id: "f2a122704894d5c8a04ed9e9c7215d82df671c9a0daf6fc8d12c524573aa0fba"
	I1206 09:51:36.390313  766627 cri.go:89] found id: "0ad3331f74421cc30e54a8d7ed856cf70c19c103394bc37a9e789e115ed3c2b7"
	I1206 09:51:36.390321  766627 cri.go:89] found id: "547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824"
	I1206 09:51:36.390325  766627 cri.go:89] found id: "8cc53972a60a1088f14adc164b77fa7b466008c043e3b641e5958fa45bf8a14b"
	I1206 09:51:36.390330  766627 cri.go:89] found id: ""
	I1206 09:51:36.390373  766627 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:51:36.403508  766627 retry.go:31] will retry after 334.878797ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:51:36Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:51:36.739120  766627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:51:36.754053  766627 pause.go:52] kubelet running: false
	I1206 09:51:36.754126  766627 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:51:36.931696  766627 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:51:36.931801  766627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:51:37.004203  766627 cri.go:89] found id: "42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2"
	I1206 09:51:37.004231  766627 cri.go:89] found id: "e7cdddacd3684d67407466e547704422e7b3d83504f8602a5eed09903630559d"
	I1206 09:51:37.004238  766627 cri.go:89] found id: "75357b2b778c96496d8c4298aeb32324c9a83e9955f0c8b8385c30a0381501f1"
	I1206 09:51:37.004243  766627 cri.go:89] found id: "3eee5f1d4d11750ebd0190ccbd3ec6afbf1f4658b007f40bf277f2f27891ed47"
	I1206 09:51:37.004247  766627 cri.go:89] found id: "0b68753786d00a4bb60e47f31e486200fabaaeb743cbffa572339af4be74a216"
	I1206 09:51:37.004250  766627 cri.go:89] found id: "c63bd209bc99c4f282c51f22f469fa540401634190be26711d150482c1f373d7"
	I1206 09:51:37.004253  766627 cri.go:89] found id: "c1a010de0842e99d0ec887649d5d92d06a2a5d0238c5a7ead0bc168b3d098af0"
	I1206 09:51:37.004258  766627 cri.go:89] found id: "f2a122704894d5c8a04ed9e9c7215d82df671c9a0daf6fc8d12c524573aa0fba"
	I1206 09:51:37.004262  766627 cri.go:89] found id: "0ad3331f74421cc30e54a8d7ed856cf70c19c103394bc37a9e789e115ed3c2b7"
	I1206 09:51:37.004276  766627 cri.go:89] found id: "547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824"
	I1206 09:51:37.004282  766627 cri.go:89] found id: "8cc53972a60a1088f14adc164b77fa7b466008c043e3b641e5958fa45bf8a14b"
	I1206 09:51:37.004286  766627 cri.go:89] found id: ""
	I1206 09:51:37.004339  766627 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:51:37.017530  766627 retry.go:31] will retry after 440.565493ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:51:37Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:51:37.459204  766627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:51:37.473124  766627 pause.go:52] kubelet running: false
	I1206 09:51:37.473186  766627 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:51:37.646855  766627 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:51:37.646927  766627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:51:37.725153  766627 cri.go:89] found id: "42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2"
	I1206 09:51:37.725183  766627 cri.go:89] found id: "e7cdddacd3684d67407466e547704422e7b3d83504f8602a5eed09903630559d"
	I1206 09:51:37.725189  766627 cri.go:89] found id: "75357b2b778c96496d8c4298aeb32324c9a83e9955f0c8b8385c30a0381501f1"
	I1206 09:51:37.725195  766627 cri.go:89] found id: "3eee5f1d4d11750ebd0190ccbd3ec6afbf1f4658b007f40bf277f2f27891ed47"
	I1206 09:51:37.725200  766627 cri.go:89] found id: "0b68753786d00a4bb60e47f31e486200fabaaeb743cbffa572339af4be74a216"
	I1206 09:51:37.725205  766627 cri.go:89] found id: "c63bd209bc99c4f282c51f22f469fa540401634190be26711d150482c1f373d7"
	I1206 09:51:37.725210  766627 cri.go:89] found id: "c1a010de0842e99d0ec887649d5d92d06a2a5d0238c5a7ead0bc168b3d098af0"
	I1206 09:51:37.725214  766627 cri.go:89] found id: "f2a122704894d5c8a04ed9e9c7215d82df671c9a0daf6fc8d12c524573aa0fba"
	I1206 09:51:37.725228  766627 cri.go:89] found id: "0ad3331f74421cc30e54a8d7ed856cf70c19c103394bc37a9e789e115ed3c2b7"
	I1206 09:51:37.725252  766627 cri.go:89] found id: "547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824"
	I1206 09:51:37.725258  766627 cri.go:89] found id: "8cc53972a60a1088f14adc164b77fa7b466008c043e3b641e5958fa45bf8a14b"
	I1206 09:51:37.725262  766627 cri.go:89] found id: ""
	I1206 09:51:37.725307  766627 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:51:37.746653  766627 out.go:203] 
	W1206 09:51:37.748266  766627 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:51:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:51:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:51:37.748293  766627 out.go:285] * 
	* 
	W1206 09:51:37.755140  766627 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:51:37.775118  766627 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-507108 --alsologtostderr -v=1 failed: exit status 80
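The failure mode is visible in the stderr above: after `systemctl disable --now kubelet`, minikube enumerates containers with `sudo runc list -f json`, and every attempt (including both retries) dies on "open /run/runc: no such file or directory", i.e. runc's default state directory is absent inside the node. The docker inspect output below shows the kicbase container mounts /run as a tmpfs, so the directory only exists once the runtime has populated it. A minimal way to reproduce the check by hand, using the same binary and profile as this run:

	out/minikube-linux-amd64 -p old-k8s-version-507108 ssh -- 'ls -ld /run/runc'
	out/minikube-linux-amd64 -p old-k8s-version-507108 ssh -- 'sudo runc list -f json'
	out/minikube-linux-amd64 -p old-k8s-version-507108 ssh -- 'sudo crictl ps -a'

If crictl still lists the kube-system containers while /run/runc is missing, pause is failing on runc's state-directory lookup rather than because the containers are gone.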
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-507108
helpers_test.go:243: (dbg) docker inspect old-k8s-version-507108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3",
	        "Created": "2025-12-06T09:49:19.254369634Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 753812,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:50:32.812726363Z",
	            "FinishedAt": "2025-12-06T09:50:31.882176907Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/hosts",
	        "LogPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3-json.log",
	        "Name": "/old-k8s-version-507108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-507108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-507108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3",
	                "LowerDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-507108",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-507108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-507108",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-507108",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-507108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3a6aa59344a8fac1599fe36ddaf60cf95e009ee6f0ba5a6591a56e7cf50759ff",
	            "SandboxKey": "/var/run/docker/netns/3a6aa59344a8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-507108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "68b5b112ecd8d43eda4b45466a2546c01f5d267b315a697829fb79471d3e3a2b",
	                    "EndpointID": "ee3e0a5eeff9b67a8a4e937cef86f78c5df7c8558cff01c0e2efa231f2593617",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "8e:33:79:22:06:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-507108",
	                        "e36525fbfc60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 -n old-k8s-version-507108
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 -n old-k8s-version-507108: exit status 2 (509.666633ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
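The exit status 2 here is consistent with the half-applied pause: the first pause attempt already disabled the kubelet ("kubelet running: false" in the stderr above), so the container reports Running while the Kubernetes components are not. For a per-component breakdown rather than the bare Host field, something like the following (same binary and profile) shows which pieces are down:

	out/minikube-linux-amd64 status -p old-k8s-version-507108 -o json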
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-507108 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-507108 logs -n 25: (1.423598543s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-983381 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo containerd config dump                                                                                                                                                                                                  │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo crio config                                                                                                                                                                                                             │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ delete  │ -p cilium-983381                                                                                                                                                                                                                              │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:49 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │                     │
	│ stop    │ -p old-k8s-version-507108 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-507108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p cert-expiration-669264 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-669264    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p cert-expiration-669264                                                                                                                                                                                                                     │ cert-expiration-669264    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-521770         │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ image   │ old-k8s-version-507108 image list --format=json                                                                                                                                                                                               │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ pause   │ -p old-k8s-version-507108 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-581224 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-581224 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:51:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:51:36.624765  766954 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:51:36.624875  766954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:51:36.624883  766954 out.go:374] Setting ErrFile to fd 2...
	I1206 09:51:36.624887  766954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:51:36.625137  766954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:51:36.625747  766954 out.go:368] Setting JSON to false
	I1206 09:51:36.627227  766954 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9241,"bootTime":1765005456,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:51:36.627302  766954 start.go:143] virtualization: kvm guest
	I1206 09:51:36.628867  766954 out.go:179] * [kubernetes-upgrade-581224] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:51:36.630222  766954 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:51:36.630245  766954 notify.go:221] Checking for updates...
	I1206 09:51:36.632077  766954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:51:36.633171  766954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:51:36.634151  766954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:51:36.635101  766954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:51:36.635978  766954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:51:36.637308  766954 config.go:182] Loaded profile config "kubernetes-upgrade-581224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:51:36.637888  766954 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:51:36.662991  766954 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:51:36.663089  766954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:51:36.729754  766954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:85 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-06 09:51:36.718716247 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:51:36.729880  766954 docker.go:319] overlay module found
	I1206 09:51:36.731392  766954 out.go:179] * Using the docker driver based on existing profile
	I1206 09:51:36.732387  766954 start.go:309] selected driver: docker
	I1206 09:51:36.732407  766954 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-581224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-581224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:51:36.732576  766954 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:51:36.733364  766954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:51:36.800626  766954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:85 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-06 09:51:36.790429922 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:51:36.801013  766954 cni.go:84] Creating CNI manager for ""
	I1206 09:51:36.801098  766954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:51:36.801156  766954 start.go:353] cluster config:
	{Name:kubernetes-upgrade-581224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-581224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:51:36.802878  766954 out.go:179] * Starting "kubernetes-upgrade-581224" primary control-plane node in "kubernetes-upgrade-581224" cluster
	I1206 09:51:36.810504  766954 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:51:36.811720  766954 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:51:36.812729  766954 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:51:36.812768  766954 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:51:36.812778  766954 cache.go:65] Caching tarball of preloaded images
	I1206 09:51:36.812840  766954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:51:36.812874  766954 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:51:36.812885  766954 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:51:36.813015  766954 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/config.json ...
	I1206 09:51:36.841261  766954 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:51:36.841280  766954 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:51:36.841297  766954 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:51:36.841329  766954 start.go:360] acquireMachinesLock for kubernetes-upgrade-581224: {Name:mk8ad71ba73205dbcda5171bd6eed20a2901a214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:51:36.841382  766954 start.go:364] duration metric: took 35.71µs to acquireMachinesLock for "kubernetes-upgrade-581224"
	I1206 09:51:36.841410  766954 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:51:36.841415  766954 fix.go:54] fixHost starting: 
	I1206 09:51:36.841672  766954 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-581224 --format={{.State.Status}}
	I1206 09:51:36.860467  766954 fix.go:112] recreateIfNeeded on kubernetes-upgrade-581224: state=Running err=<nil>
	W1206 09:51:36.860501  766954 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:51:36.197927  760217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:51:36.697116  760217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:51:37.197172  760217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:51:37.697261  760217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:51:37.816767  760217 kubeadm.go:1114] duration metric: took 5.203365103s to wait for elevateKubeSystemPrivileges
	I1206 09:51:37.816808  760217 kubeadm.go:403] duration metric: took 13.347503099s to StartCluster
	I1206 09:51:37.816830  760217 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:37.816908  760217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:51:37.818670  760217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:37.856530  760217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:51:37.856595  760217 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:51:37.856838  760217 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:51:37.856892  760217 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:51:37.858040  760217 addons.go:70] Setting default-storageclass=true in profile "no-preload-521770"
	I1206 09:51:37.858097  760217 addons.go:70] Setting storage-provisioner=true in profile "no-preload-521770"
	I1206 09:51:37.858118  760217 addons.go:239] Setting addon storage-provisioner=true in "no-preload-521770"
	I1206 09:51:37.858187  760217 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521770"
	I1206 09:51:37.858336  760217 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:51:37.859191  760217 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:51:37.859940  760217 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:51:37.867161  760217 out.go:179] * Verifying Kubernetes components...
	I1206 09:51:37.869438  760217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:51:37.892635  760217 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Dec 06 09:51:05 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:05.194422099Z" level=info msg="Started container" PID=1736 containerID=6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper id=91f17762-1dde-41df-b5d4-deb7885fe517 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a656bc53f746466b562a416a88688b94a905a32e7c17b373e6ff726b3fa61f7b
	Dec 06 09:51:05 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:05.996298078Z" level=info msg="Removing container: f148cb198130412d56efd21f4a75cca26c1c4f877d545fb395c27db1880c4128" id=d5e88e92-336b-4303-8ec8-8fb973f51044 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:51:06 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:06.006639758Z" level=info msg="Removed container f148cb198130412d56efd21f4a75cca26c1c4f877d545fb395c27db1880c4128: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper" id=d5e88e92-336b-4303-8ec8-8fb973f51044 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.019533937Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=024005c1-88f3-43df-8dda-4897113f146a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.020561257Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a747abdb-d1ba-4472-b096-5e975bd204ac name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.021608253Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c8b08ce4-b6d0-4862-bca3-0a610bb35096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.021759814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.026313499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.026552249Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d840de3023678b8bdf075bd94db8b568d4b278337e1eee760cfe86302a7f5b59/merged/etc/passwd: no such file or directory"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.026584529Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d840de3023678b8bdf075bd94db8b568d4b278337e1eee760cfe86302a7f5b59/merged/etc/group: no such file or directory"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.026881739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.052694109Z" level=info msg="Created container 42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2: kube-system/storage-provisioner/storage-provisioner" id=c8b08ce4-b6d0-4862-bca3-0a610bb35096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.053323756Z" level=info msg="Starting container: 42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2" id=43d87ddd-e37a-4b09-b778-34147784f4da name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.055583804Z" level=info msg="Started container" PID=1750 containerID=42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2 description=kube-system/storage-provisioner/storage-provisioner id=43d87ddd-e37a-4b09-b778-34147784f4da name=/runtime.v1.RuntimeService/StartContainer sandboxID=14e580adce428b36c9132236baa7509a51b9c4497356baf599f00cd2b70bef3e
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.90483697Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7a27758b-190f-41ab-b4e5-ee9207c5633e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.90589161Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=812606f0-db87-4599-a936-b9b0358e1153 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.906927036Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper" id=1ab84a79-e372-4e35-b891-75935b3af6c4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.907071077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.913001926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.913560924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.942620494Z" level=info msg="Created container 547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper" id=1ab84a79-e372-4e35-b891-75935b3af6c4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.943127663Z" level=info msg="Starting container: 547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824" id=7ab79706-36ad-436d-a518-7cf9f5a64ac1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.94476337Z" level=info msg="Started container" PID=1786 containerID=547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper id=7ab79706-36ad-436d-a518-7cf9f5a64ac1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a656bc53f746466b562a416a88688b94a905a32e7c17b373e6ff726b3fa61f7b
	Dec 06 09:51:29 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:29.060675573Z" level=info msg="Removing container: 6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5" id=d6ca9d31-1359-4096-ab4b-88311ea40d0c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:51:29 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:29.070419941Z" level=info msg="Removed container 6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper" id=d6ca9d31-1359-4096-ab4b-88311ea40d0c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	547e878010f36       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   2                   a656bc53f7464       dashboard-metrics-scraper-5f989dc9cf-pntp4       kubernetes-dashboard
	42a8d521b2204       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   14e580adce428       storage-provisioner                              kube-system
	8cc53972a60a1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   f29aab9d1c7b8       kubernetes-dashboard-8694d4445c-bfcks            kubernetes-dashboard
	e7cdddacd3684       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   da29f54d45cf6       coredns-5dd5756b68-qvppb                         kube-system
	81bb7d7fe77eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   31a7bf1d910c2       busybox                                          default
	75357b2b778c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   14e580adce428       storage-provisioner                              kube-system
	3eee5f1d4d117       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   03e226c026636       kube-proxy-q6xpd                                 kube-system
	0b68753786d00       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   199da8c3b7659       kindnet-pdc9w                                    kube-system
	c63bd209bc99c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   3c82706784dcc       kube-apiserver-old-k8s-version-507108            kube-system
	c1a010de0842e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   39b10f01f78a8       kube-controller-manager-old-k8s-version-507108   kube-system
	f2a122704894d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   de07cfa549c78       etcd-old-k8s-version-507108                      kube-system
	0ad3331f74421       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   e597ffc303f98       kube-scheduler-old-k8s-version-507108            kube-system
	
	
	==> coredns [e7cdddacd3684d67407466e547704422e7b3d83504f8602a5eed09903630559d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33903 - 46442 "HINFO IN 2567026679596900439.7854628332536336291. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031751049s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-507108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-507108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=old-k8s-version-507108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_49_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:49:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-507108
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:51:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:51:12 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:51:12 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:51:12 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:51:12 +0000   Sat, 06 Dec 2025 09:50:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-507108
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                9c098f98-750a-4e92-a2e1-303c4ddd2d10
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-qvppb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-507108                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-pdc9w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-507108             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-507108    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-q6xpd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-507108             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-pntp4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bfcks             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-507108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-507108 event: Registered Node old-k8s-version-507108 in Controller
	  Normal  NodeReady                98s                kubelet          Node old-k8s-version-507108 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 61s)  kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 61s)  kubelet          Node old-k8s-version-507108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 61s)  kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-507108 event: Registered Node old-k8s-version-507108 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [f2a122704894d5c8a04ed9e9c7215d82df671c9a0daf6fc8d12c524573aa0fba] <==
	{"level":"info","ts":"2025-12-06T09:50:39.482579Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-06T09:50:39.482281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-06T09:50:39.48272Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-06T09:50:39.482814Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:50:39.482843Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:50:39.484708Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-06T09:50:39.484975Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-06T09:50:39.485014Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-06T09:50:39.485499Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-06T09:50:39.485567Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-06T09:50:40.971263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-06T09:50:40.971313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-06T09:50:40.97134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-06T09:50:40.971351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-06T09:50:40.971356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-06T09:50:40.971364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-06T09:50:40.971372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-06T09:50:40.972291Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-507108 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-06T09:50:40.972307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:50:40.972314Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:50:40.972618Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-06T09:50:40.972653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-06T09:50:40.973625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-06T09:50:40.973664Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-06T09:51:23.415629Z","caller":"traceutil/trace.go:171","msg":"trace[1990098218] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"152.916154ms","start":"2025-12-06T09:51:23.262671Z","end":"2025-12-06T09:51:23.415587Z","steps":["trace[1990098218] 'process raft request'  (duration: 75.936244ms)","trace[1990098218] 'compare'  (duration: 76.805389ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:51:39 up  2:34,  0 user,  load average: 2.83, 2.35, 3.09
	Linux old-k8s-version-507108 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b68753786d00a4bb60e47f31e486200fabaaeb743cbffa572339af4be74a216] <==
	I1206 09:50:43.495254       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:50:43.495626       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:50:43.495791       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:50:43.495854       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:50:43.495896       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:50:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:50:43.695735       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:50:43.764415       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:50:43.764505       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:50:43.864433       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:50:44.064615       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:50:44.064643       1 metrics.go:72] Registering metrics
	I1206 09:50:44.064699       1 controller.go:711] "Syncing nftables rules"
	I1206 09:50:53.695957       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:50:53.696008       1 main.go:301] handling current node
	I1206 09:51:03.699513       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:51:03.699553       1 main.go:301] handling current node
	I1206 09:51:13.696694       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:51:13.696723       1 main.go:301] handling current node
	I1206 09:51:23.696049       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:51:23.696089       1 main.go:301] handling current node
	I1206 09:51:33.699759       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:51:33.699791       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c63bd209bc99c4f282c51f22f469fa540401634190be26711d150482c1f373d7] <==
	I1206 09:50:41.929591       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:50:41.946949       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1206 09:50:41.980299       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1206 09:50:41.980321       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1206 09:50:41.980353       1 shared_informer.go:318] Caches are synced for configmaps
	I1206 09:50:41.980401       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 09:50:41.980408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:50:41.980665       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1206 09:50:41.981152       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1206 09:50:41.981261       1 aggregator.go:166] initial CRD sync complete...
	I1206 09:50:41.981279       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 09:50:41.981287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:50:41.981297       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:50:42.796047       1 controller.go:624] quota admission added evaluator for: namespaces
	I1206 09:50:42.837804       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 09:50:42.858206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:50:42.869059       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:50:42.877043       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 09:50:42.884575       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:50:42.927035       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.8.96"}
	I1206 09:50:42.941884       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.144.212"}
	I1206 09:50:54.150175       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1206 09:50:54.402090       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 09:50:54.402091       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 09:50:54.450026       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c1a010de0842e99d0ec887649d5d92d06a2a5d0238c5a7ead0bc168b3d098af0] <==
	I1206 09:50:54.179171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="83.791µs"
	I1206 09:50:54.187581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.44µs"
	I1206 09:50:54.188339       1 shared_informer.go:318] Caches are synced for endpoint
	I1206 09:50:54.212935       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1206 09:50:54.217484       1 shared_informer.go:318] Caches are synced for ephemeral
	I1206 09:50:54.230816       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:50:54.237964       1 shared_informer.go:318] Caches are synced for stateful set
	I1206 09:50:54.257588       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1206 09:50:54.261919       1 shared_informer.go:318] Caches are synced for expand
	I1206 09:50:54.264301       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:50:54.284842       1 shared_informer.go:318] Caches are synced for attach detach
	I1206 09:50:54.305736       1 shared_informer.go:318] Caches are synced for persistent volume
	I1206 09:50:54.312014       1 shared_informer.go:318] Caches are synced for PVC protection
	I1206 09:50:54.682842       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:50:54.747271       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:50:54.747305       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1206 09:50:58.992306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.924087ms"
	I1206 09:50:58.992474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.901µs"
	I1206 09:51:05.034766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.597µs"
	I1206 09:51:06.006304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.35µs"
	I1206 09:51:07.013416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.367µs"
	I1206 09:51:21.426420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.793166ms"
	I1206 09:51:21.426561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.855µs"
	I1206 09:51:29.069912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.902µs"
	I1206 09:51:34.482218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.276µs"
	
	
	==> kube-proxy [3eee5f1d4d11750ebd0190ccbd3ec6afbf1f4658b007f40bf277f2f27891ed47] <==
	I1206 09:50:43.344000       1 server_others.go:69] "Using iptables proxy"
	I1206 09:50:43.354953       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1206 09:50:43.375539       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:50:43.378025       1 server_others.go:152] "Using iptables Proxier"
	I1206 09:50:43.378061       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 09:50:43.378069       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 09:50:43.378097       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 09:50:43.378329       1 server.go:846] "Version info" version="v1.28.0"
	I1206 09:50:43.378344       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:50:43.379004       1 config.go:97] "Starting endpoint slice config controller"
	I1206 09:50:43.379017       1 config.go:188] "Starting service config controller"
	I1206 09:50:43.379040       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 09:50:43.379042       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 09:50:43.379270       1 config.go:315] "Starting node config controller"
	I1206 09:50:43.379380       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 09:50:43.479209       1 shared_informer.go:318] Caches are synced for service config
	I1206 09:50:43.479254       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 09:50:43.479509       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0ad3331f74421cc30e54a8d7ed856cf70c19c103394bc37a9e789e115ed3c2b7] <==
	I1206 09:50:39.954404       1 serving.go:348] Generated self-signed cert in-memory
	W1206 09:50:41.918838       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:50:41.918876       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:50:41.918891       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:50:41.918910       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:50:41.937321       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1206 09:50:41.937358       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:50:41.938815       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:50:41.938863       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 09:50:41.939640       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1206 09:50:41.939683       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1206 09:50:42.039761       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.170131     736 topology_manager.go:215] "Topology Admit Handler" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-pntp4"
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.294602     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cadf548c-150e-4634-bed4-cec0c3fc5041-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-bfcks\" (UID: \"cadf548c-150e-4634-bed4-cec0c3fc5041\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfcks"
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.294649     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8n9\" (UniqueName: \"kubernetes.io/projected/64ea7f78-1d44-49cc-b8cb-b342159aa307-kube-api-access-fl8n9\") pod \"dashboard-metrics-scraper-5f989dc9cf-pntp4\" (UID: \"64ea7f78-1d44-49cc-b8cb-b342159aa307\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4"
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.294670     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zll\" (UniqueName: \"kubernetes.io/projected/cadf548c-150e-4634-bed4-cec0c3fc5041-kube-api-access-56zll\") pod \"kubernetes-dashboard-8694d4445c-bfcks\" (UID: \"cadf548c-150e-4634-bed4-cec0c3fc5041\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfcks"
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.294691     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/64ea7f78-1d44-49cc-b8cb-b342159aa307-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-pntp4\" (UID: \"64ea7f78-1d44-49cc-b8cb-b342159aa307\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4"
	Dec 06 09:50:58 old-k8s-version-507108 kubelet[736]: I1206 09:50:58.983931     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfcks" podStartSLOduration=0.704416073 podCreationTimestamp="2025-12-06 09:50:54 +0000 UTC" firstStartedPulling="2025-12-06 09:50:54.489418558 +0000 UTC m=+15.676631241" lastFinishedPulling="2025-12-06 09:50:58.768872575 +0000 UTC m=+19.956085267" observedRunningTime="2025-12-06 09:50:58.983824364 +0000 UTC m=+20.171037066" watchObservedRunningTime="2025-12-06 09:50:58.983870099 +0000 UTC m=+20.171082798"
	Dec 06 09:51:04 old-k8s-version-507108 kubelet[736]: I1206 09:51:04.990150     736 scope.go:117] "RemoveContainer" containerID="f148cb198130412d56efd21f4a75cca26c1c4f877d545fb395c27db1880c4128"
	Dec 06 09:51:05 old-k8s-version-507108 kubelet[736]: I1206 09:51:05.994672     736 scope.go:117] "RemoveContainer" containerID="f148cb198130412d56efd21f4a75cca26c1c4f877d545fb395c27db1880c4128"
	Dec 06 09:51:05 old-k8s-version-507108 kubelet[736]: I1206 09:51:05.994796     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:05 old-k8s-version-507108 kubelet[736]: E1206 09:51:05.995161     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:06 old-k8s-version-507108 kubelet[736]: I1206 09:51:06.999713     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:07 old-k8s-version-507108 kubelet[736]: E1206 09:51:07.000120     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:14 old-k8s-version-507108 kubelet[736]: I1206 09:51:14.019008     736 scope.go:117] "RemoveContainer" containerID="75357b2b778c96496d8c4298aeb32324c9a83e9955f0c8b8385c30a0381501f1"
	Dec 06 09:51:14 old-k8s-version-507108 kubelet[736]: I1206 09:51:14.472562     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:14 old-k8s-version-507108 kubelet[736]: E1206 09:51:14.472937     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:28 old-k8s-version-507108 kubelet[736]: I1206 09:51:28.904046     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:29 old-k8s-version-507108 kubelet[736]: I1206 09:51:29.059381     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:29 old-k8s-version-507108 kubelet[736]: I1206 09:51:29.059638     736 scope.go:117] "RemoveContainer" containerID="547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824"
	Dec 06 09:51:29 old-k8s-version-507108 kubelet[736]: E1206 09:51:29.060004     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:34 old-k8s-version-507108 kubelet[736]: I1206 09:51:34.472555     736 scope.go:117] "RemoveContainer" containerID="547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824"
	Dec 06 09:51:34 old-k8s-version-507108 kubelet[736]: E1206 09:51:34.472855     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:36 old-k8s-version-507108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:51:36 old-k8s-version-507108 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:51:36 old-k8s-version-507108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:51:36 old-k8s-version-507108 systemd[1]: kubelet.service: Consumed 1.583s CPU time.
	
	
	==> kubernetes-dashboard [8cc53972a60a1088f14adc164b77fa7b466008c043e3b641e5958fa45bf8a14b] <==
	2025/12/06 09:50:58 Using namespace: kubernetes-dashboard
	2025/12/06 09:50:58 Using in-cluster config to connect to apiserver
	2025/12/06 09:50:58 Using secret token for csrf signing
	2025/12/06 09:50:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:50:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:50:58 Successful initial request to the apiserver, version: v1.28.0
	2025/12/06 09:50:58 Generating JWE encryption key
	2025/12/06 09:50:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:50:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:50:59 Initializing JWE encryption key from synchronized object
	2025/12/06 09:50:59 Creating in-cluster Sidecar client
	2025/12/06 09:50:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:50:59 Serving insecurely on HTTP port: 9090
	2025/12/06 09:51:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:50:58 Starting overwatch
	
	
	==> storage-provisioner [42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2] <==
	I1206 09:51:14.069069       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:51:14.078583       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:51:14.078661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 09:51:31.476977       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:51:31.477191       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-507108_9f37a9b5-64b5-4140-80e1-561ff8316e7c!
	I1206 09:51:31.477218       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ae82bcf-d8bd-405a-ab47-eb637aa10d2b", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-507108_9f37a9b5-64b5-4140-80e1-561ff8316e7c became leader
	I1206 09:51:31.578170       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-507108_9f37a9b5-64b5-4140-80e1-561ff8316e7c!
	
	
	==> storage-provisioner [75357b2b778c96496d8c4298aeb32324c9a83e9955f0c8b8385c30a0381501f1] <==
	I1206 09:50:43.308484       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:51:13.310880       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
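Two failures stand out in the post-mortem above: the first storage-provisioner container exited with "dial tcp 10.96.0.1:443: i/o timeout", and dashboard-metrics-scraper went into CrashLoopBackOff with the kubelet's usual doubling back-off (10s, then 20s). A minimal sketch for inspecting the crash-looping container by hand, assuming the profile is still up; the container ID is the one named in the kubelet log:

	# list exited scraper containers, then pull their recent output
	out/minikube-linux-amd64 -p old-k8s-version-507108 ssh -- \
	  sudo crictl ps -a --name dashboard-metrics-scraper --state exited
	out/minikube-linux-amd64 -p old-k8s-version-507108 ssh -- \
	  sudo crictl logs --tail 50 6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5
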
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-507108 -n old-k8s-version-507108
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-507108 -n old-k8s-version-507108: exit status 2 (396.734118ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
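Exit status 2 alongside a "Running" host is consistent here: the Pause test has already stopped the kubelet, so the node container is up while the control plane is not, and minikube status reflects component state in its exit code. A sketch that reads the three relevant fields in one go-template instead of one call per field (field names follow the status queries used above):

	out/minikube-linux-amd64 status -p old-k8s-version-507108 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'
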
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-507108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
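The field selector above is how the harness surfaces pods stuck outside the Running phase. The same query with a phase column added, purely for readability (the custom-columns output is an illustration, not something the harness emits):

	kubectl --context old-k8s-version-507108 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase
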
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
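The snapshot prints "<empty>" for unset variables. A bash one-liner that reproduces it via indirect expansion (a sketch, not harness code):

	for v in HTTP_PROXY HTTPS_PROXY NO_PROXY; do printf '%s="%s"\n' "$v" "${!v:-<empty>}"; done
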
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-507108
helpers_test.go:243: (dbg) docker inspect old-k8s-version-507108:

-- stdout --
	[
	    {
	        "Id": "e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3",
	        "Created": "2025-12-06T09:49:19.254369634Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 753812,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:50:32.812726363Z",
	            "FinishedAt": "2025-12-06T09:50:31.882176907Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/hosts",
	        "LogPath": "/var/lib/docker/containers/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3/e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3-json.log",
	        "Name": "/old-k8s-version-507108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-507108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-507108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e36525fbfc60710e8c241a0dad011066c01ed8eea0b21320e4b897eda4ff23b3",
	                "LowerDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2bdcaf10b71cad7976ab52fd89b21d65f99b6622e47b57bf6b519ba77e1d93bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-507108",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-507108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-507108",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-507108",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-507108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3a6aa59344a8fac1599fe36ddaf60cf95e009ee6f0ba5a6591a56e7cf50759ff",
	            "SandboxKey": "/var/run/docker/netns/3a6aa59344a8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-507108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "68b5b112ecd8d43eda4b45466a2546c01f5d267b315a697829fb79471d3e3a2b",
	                    "EndpointID": "ee3e0a5eeff9b67a8a4e937cef86f78c5df7c8558cff01c0e2efa231f2593617",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "8e:33:79:22:06:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-507108",
	                        "e36525fbfc60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
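The inspect output shows the node container Running with 8443 published on 127.0.0.1:33189, so the apiserver port stays reachable from the host even with the kubelet stopped. The harness already extracts the SSH port with an inspect template; the same pattern works for any published port (sketch):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-507108
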
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 -n old-k8s-version-507108
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 -n old-k8s-version-507108: exit status 2 (377.586637ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-507108 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-507108 logs -n 25: (1.252319606s)
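The -n 25 flag caps each log source at its last 25 lines, which keeps the per-component sections below short. Redirecting the same command preserves the post-mortem for later triage (a sketch; the output path is arbitrary):

	out/minikube-linux-amd64 -p old-k8s-version-507108 logs -n 25 > pause-postmortem.log 2>&1
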
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-983381 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo containerd config dump                                                                                                                                                                                                  │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo crio config                                                                                                                                                                                                             │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ delete  │ -p cilium-983381                                                                                                                                                                                                                              │ cilium-983381             │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:49 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │                     │
	│ stop    │ -p old-k8s-version-507108 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-507108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p cert-expiration-669264 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-669264    │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p cert-expiration-669264                                                                                                                                                                                                                     │ cert-expiration-669264    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-521770         │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ image   │ old-k8s-version-507108 image list --format=json                                                                                                                                                                                               │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ pause   │ -p old-k8s-version-507108 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-507108    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-581224 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-581224 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:51:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:51:36.624765  766954 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:51:36.624875  766954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:51:36.624883  766954 out.go:374] Setting ErrFile to fd 2...
	I1206 09:51:36.624887  766954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:51:36.625137  766954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:51:36.625747  766954 out.go:368] Setting JSON to false
	I1206 09:51:36.627227  766954 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9241,"bootTime":1765005456,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:51:36.627302  766954 start.go:143] virtualization: kvm guest
	I1206 09:51:36.628867  766954 out.go:179] * [kubernetes-upgrade-581224] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:51:36.630222  766954 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:51:36.630245  766954 notify.go:221] Checking for updates...
	I1206 09:51:36.632077  766954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:51:36.633171  766954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:51:36.634151  766954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:51:36.635101  766954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:51:36.635978  766954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:51:36.637308  766954 config.go:182] Loaded profile config "kubernetes-upgrade-581224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:51:36.637888  766954 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:51:36.662991  766954 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:51:36.663089  766954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:51:36.729754  766954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:85 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-06 09:51:36.718716247 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:51:36.729880  766954 docker.go:319] overlay module found
	I1206 09:51:36.731392  766954 out.go:179] * Using the docker driver based on existing profile
	I1206 09:51:36.732387  766954 start.go:309] selected driver: docker
	I1206 09:51:36.732407  766954 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-581224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-581224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:51:36.732576  766954 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:51:36.733364  766954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:51:36.800626  766954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:85 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-06 09:51:36.790429922 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:51:36.801013  766954 cni.go:84] Creating CNI manager for ""
	I1206 09:51:36.801098  766954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:51:36.801156  766954 start.go:353] cluster config:
	{Name:kubernetes-upgrade-581224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-581224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:51:36.802878  766954 out.go:179] * Starting "kubernetes-upgrade-581224" primary control-plane node in "kubernetes-upgrade-581224" cluster
	I1206 09:51:36.810504  766954 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:51:36.811720  766954 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:51:36.812729  766954 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:51:36.812768  766954 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:51:36.812778  766954 cache.go:65] Caching tarball of preloaded images
	I1206 09:51:36.812840  766954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:51:36.812874  766954 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:51:36.812885  766954 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:51:36.813015  766954 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/config.json ...
	I1206 09:51:36.841261  766954 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:51:36.841280  766954 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:51:36.841297  766954 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:51:36.841329  766954 start.go:360] acquireMachinesLock for kubernetes-upgrade-581224: {Name:mk8ad71ba73205dbcda5171bd6eed20a2901a214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:51:36.841382  766954 start.go:364] duration metric: took 35.71µs to acquireMachinesLock for "kubernetes-upgrade-581224"
	I1206 09:51:36.841410  766954 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:51:36.841415  766954 fix.go:54] fixHost starting: 
	I1206 09:51:36.841672  766954 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-581224 --format={{.State.Status}}
	I1206 09:51:36.860467  766954 fix.go:112] recreateIfNeeded on kubernetes-upgrade-581224: state=Running err=<nil>
	W1206 09:51:36.860501  766954 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:51:36.197927  760217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:51:36.697116  760217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:51:37.197172  760217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:51:37.697261  760217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:51:37.816767  760217 kubeadm.go:1114] duration metric: took 5.203365103s to wait for elevateKubeSystemPrivileges
	I1206 09:51:37.816808  760217 kubeadm.go:403] duration metric: took 13.347503099s to StartCluster
	I1206 09:51:37.816830  760217 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:37.816908  760217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:51:37.818670  760217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:37.856530  760217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:51:37.856595  760217 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:51:37.856838  760217 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:51:37.856892  760217 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:51:37.858040  760217 addons.go:70] Setting default-storageclass=true in profile "no-preload-521770"
	I1206 09:51:37.858097  760217 addons.go:70] Setting storage-provisioner=true in profile "no-preload-521770"
	I1206 09:51:37.858118  760217 addons.go:239] Setting addon storage-provisioner=true in "no-preload-521770"
	I1206 09:51:37.858187  760217 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521770"
	I1206 09:51:37.858336  760217 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:51:37.859191  760217 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:51:37.859940  760217 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:51:37.867161  760217 out.go:179] * Verifying Kubernetes components...
	I1206 09:51:37.869438  760217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:51:37.892635  760217 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:51:37.893403  760217 addons.go:239] Setting addon default-storageclass=true in "no-preload-521770"
	I1206 09:51:37.894029  760217 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:51:37.894081  760217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:51:37.894156  760217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:51:37.894716  760217 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:51:37.896684  760217 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:51:37.925952  760217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:51:37.929421  760217 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:51:37.929448  760217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:51:37.929529  760217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:51:37.969364  760217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:51:37.992698  760217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:51:38.060652  760217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:51:38.080647  760217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:51:38.119609  760217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:51:38.265373  760217 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1206 09:51:38.489105  760217 node_ready.go:35] waiting up to 6m0s for node "no-preload-521770" to be "Ready" ...
	I1206 09:51:38.498724  760217 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:51:37.837540  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:51:37.838049  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:51:37.838108  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:51:37.838168  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:51:37.894149  725997 cri.go:89] found id: "ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c"
	I1206 09:51:37.894170  725997 cri.go:89] found id: ""
	I1206 09:51:37.894181  725997 logs.go:282] 1 containers: [ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c]
	I1206 09:51:37.894248  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:37.902132  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:51:37.902267  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:51:37.977957  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:51:37.978032  725997 cri.go:89] found id: ""
	I1206 09:51:37.978044  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:51:37.978101  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:37.983944  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:51:37.984014  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:51:38.046184  725997 cri.go:89] found id: ""
	I1206 09:51:38.046208  725997 logs.go:282] 0 containers: []
	W1206 09:51:38.046217  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:51:38.046225  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:51:38.046301  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:51:38.103540  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:51:38.103566  725997 cri.go:89] found id: ""
	I1206 09:51:38.103576  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:51:38.103652  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:38.109274  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:51:38.109353  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:51:38.164957  725997 cri.go:89] found id: ""
	I1206 09:51:38.164989  725997 logs.go:282] 0 containers: []
	W1206 09:51:38.165001  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:51:38.165010  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:51:38.165072  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:51:38.226635  725997 cri.go:89] found id: "f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113"
	I1206 09:51:38.226660  725997 cri.go:89] found id: ""
	I1206 09:51:38.226670  725997 logs.go:282] 1 containers: [f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113]
	I1206 09:51:38.226732  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:38.236390  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:51:38.236497  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:51:38.290670  725997 cri.go:89] found id: ""
	I1206 09:51:38.290807  725997 logs.go:282] 0 containers: []
	W1206 09:51:38.290832  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:51:38.290847  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:51:38.290921  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:51:38.347702  725997 cri.go:89] found id: ""
	I1206 09:51:38.347728  725997 logs.go:282] 0 containers: []
	W1206 09:51:38.347738  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:51:38.347761  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:51:38.347776  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:51:38.432799  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:51:38.432821  725997 logs.go:123] Gathering logs for kube-apiserver [ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c] ...
	I1206 09:51:38.432836  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c"
	I1206 09:51:38.487009  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:51:38.487049  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:51:38.534149  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:51:38.534183  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:51:38.617256  725997 logs.go:123] Gathering logs for kube-controller-manager [f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113] ...
	I1206 09:51:38.617297  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113"
	I1206 09:51:38.655361  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:51:38.655386  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:51:38.777964  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:51:38.778007  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:51:38.804123  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:51:38.804159  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:51:38.865857  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:51:38.865895  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
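The pass above is minikube's log-gathering loop: each control-plane component is inventoried with crictl (falling back through `which crictl`), and anything found is tailed alongside the kubelet, dmesg, and CRI-O journals. A minimal sketch of reproducing the same collection by hand on the node, using only commands that appear in this run (<container-id> is a placeholder for an ID reported by crictl):

    # Inventory containers for one component; empty output means "no container found":
    sudo crictl ps -a --quiet --name=kube-scheduler

    # Tail the last 400 lines of a container found above:
    sudo crictl logs --tail 400 <container-id>

    # The non-container sources gathered alongside:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400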
	I1206 09:51:36.861863  766954 out.go:252] * Updating the running docker "kubernetes-upgrade-581224" container ...
	I1206 09:51:36.861888  766954 machine.go:94] provisionDockerMachine start ...
	I1206 09:51:36.861956  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:36.882707  766954 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:36.882991  766954 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33146 <nil> <nil>}
	I1206 09:51:36.883006  766954 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:51:37.019013  766954 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-581224
	
	I1206 09:51:37.019041  766954 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-581224"
	I1206 09:51:37.019105  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:37.041828  766954 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:37.042060  766954 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33146 <nil> <nil>}
	I1206 09:51:37.042073  766954 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-581224 && echo "kubernetes-upgrade-581224" | sudo tee /etc/hostname
	I1206 09:51:37.181329  766954 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-581224
	
	I1206 09:51:37.181418  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:37.199247  766954 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:37.199546  766954 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33146 <nil> <nil>}
	I1206 09:51:37.199566  766954 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-581224' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-581224/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-581224' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:51:37.332550  766954 main.go:143] libmachine: SSH cmd err, output: <nil>: 
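provisionDockerMachine never touches the container directly: it resolves the host port that Docker mapped to the node's sshd via a Go template, then drives every provisioning command over SSH as the docker user. The same lookup by hand, with the profile name from this run (the key path is an assumption, abbreviated from the Jenkins workspace path in the log):

    PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      kubernetes-upgrade-581224)

    # Machine key generated by minikube for this profile:
    ssh -i "$MINIKUBE_HOME/.minikube/machines/kubernetes-upgrade-581224/id_rsa" \
        -p "$PORT" docker@127.0.0.1 hostname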
	I1206 09:51:37.332582  766954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:51:37.332661  766954 ubuntu.go:190] setting up certificates
	I1206 09:51:37.332673  766954 provision.go:84] configureAuth start
	I1206 09:51:37.332732  766954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-581224
	I1206 09:51:37.350925  766954 provision.go:143] copyHostCerts
	I1206 09:51:37.350986  766954 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:51:37.350995  766954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:51:37.351061  766954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:51:37.351174  766954 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:51:37.351187  766954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:51:37.351217  766954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:51:37.351276  766954 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:51:37.351283  766954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:51:37.351308  766954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:51:37.351356  766954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-581224 san=[127.0.0.1 192.168.103.2 kubernetes-upgrade-581224 localhost minikube]
	I1206 09:51:37.378784  766954 provision.go:177] copyRemoteCerts
	I1206 09:51:37.378855  766954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:51:37.378917  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:37.402528  766954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kubernetes-upgrade-581224/id_rsa Username:docker}
	I1206 09:51:37.509146  766954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 09:51:37.533060  766954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:51:37.553242  766954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:51:37.571925  766954 provision.go:87] duration metric: took 239.238117ms to configureAuth
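configureAuth regenerated the server certificate with the SAN list shown above and pushed it to /etc/docker on the node in about 240ms. If a run like this needs debugging, a hedged check of what actually landed (openssl is a standard tool, not part of this run; -ext needs OpenSSL 1.1.1+):

    # On the node: confirm the subject and SANs of the provisioned server cert
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName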
	I1206 09:51:37.571954  766954 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:51:37.572123  766954 config.go:182] Loaded profile config "kubernetes-upgrade-581224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:51:37.572255  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:37.591487  766954 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:37.591820  766954 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33146 <nil> <nil>}
	I1206 09:51:37.591853  766954 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:51:38.435164  766954 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:51:38.435188  766954 machine.go:97] duration metric: took 1.573291512s to provisionDockerMachine
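The container-runtime options step is a single idempotent drop-in: CRIO_MINIKUBE_OPTIONS is written to /etc/sysconfig/crio.minikube and CRI-O is restarted, which accounts for most of the 1.57s provisioning time. Presumably the kicbase crio.service consumes that file as an EnvironmentFile (an assumption; the unit file is not shown in this run). The drop-in, plus one way to confirm the flag took effect:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" |
      sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio

    # Assumes the option ends up on crio's command line:
    ps -o args= -C crio | tr ' ' '\n' | grep -A1 -- --insecure-registry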
	I1206 09:51:38.435203  766954 start.go:293] postStartSetup for "kubernetes-upgrade-581224" (driver="docker")
	I1206 09:51:38.435217  766954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:51:38.435282  766954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:51:38.435355  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:38.460050  766954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kubernetes-upgrade-581224/id_rsa Username:docker}
	I1206 09:51:38.566966  766954 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:51:38.572220  766954 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:51:38.572274  766954 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:51:38.572287  766954 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:51:38.572347  766954 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:51:38.572507  766954 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:51:38.572639  766954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:51:38.582160  766954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:51:38.602079  766954 start.go:296] duration metric: took 166.861394ms for postStartSetup
	I1206 09:51:38.602144  766954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:51:38.602203  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:38.621260  766954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kubernetes-upgrade-581224/id_rsa Username:docker}
	I1206 09:51:38.718025  766954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:51:38.723741  766954 fix.go:56] duration metric: took 1.882319371s for fixHost
	I1206 09:51:38.723765  766954 start.go:83] releasing machines lock for "kubernetes-upgrade-581224", held for 1.882362204s
	I1206 09:51:38.723824  766954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-581224
	I1206 09:51:38.744926  766954 ssh_runner.go:195] Run: cat /version.json
	I1206 09:51:38.744996  766954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:51:38.744998  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:38.745066  766954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-581224
	I1206 09:51:38.766510  766954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kubernetes-upgrade-581224/id_rsa Username:docker}
	I1206 09:51:38.768415  766954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kubernetes-upgrade-581224/id_rsa Username:docker}
	I1206 09:51:38.865974  766954 ssh_runner.go:195] Run: systemctl --version
	I1206 09:51:38.931195  766954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:51:38.983426  766954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:51:38.989440  766954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:51:38.989533  766954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:51:39.002145  766954 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:51:39.002173  766954 start.go:496] detecting cgroup driver to use...
	I1206 09:51:39.002210  766954 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:51:39.002267  766954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:51:39.019001  766954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:51:39.034302  766954 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:51:39.034367  766954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:51:39.051745  766954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:51:39.065359  766954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:51:39.205709  766954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:51:39.356233  766954 docker.go:234] disabling docker service ...
	I1206 09:51:39.356307  766954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:51:39.377036  766954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:51:39.395646  766954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:51:39.544575  766954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:51:39.713549  766954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
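Before reconfiguring CRI-O, the start path makes sure no competing runtime can claim the CRI socket: cri-dockerd and dockerd are stopped, their sockets disabled, the services masked, and the result verified with is-active. The sequence from the log, consolidated (systemctl accepts multiple units per call):

    # cri-dockerd
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service

    # dockerd
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo 'docker is down'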
	I1206 09:51:39.732107  766954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:51:39.751955  766954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:51:39.752031  766954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:39.763528  766954 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:51:39.763596  766954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:39.775736  766954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:39.788291  766954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:39.801782  766954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:51:39.812041  766954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:39.822437  766954 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:39.833173  766954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:39.844686  766954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:51:39.855296  766954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:51:39.865919  766954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:51:40.013923  766954 ssh_runner.go:195] Run: sudo systemctl restart crio
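Everything between the crictl.yaml write and this restart is an in-place sed rewrite of /etc/crio/crio.conf.d/02-crio.conf. Consolidated into one script, with the exact expressions and values from this run:

    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Point crictl at the CRI-O socket:
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pause image, systemd cgroup driver, conmon in the pod cgroup:
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Let pods bind privileged ports without extra capabilities:
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

    # Kernel prerequisite, then restart:
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio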
	I1206 09:51:40.545387  766954 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:51:40.545495  766954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:51:40.550058  766954 start.go:564] Will wait 60s for crictl version
	I1206 09:51:40.550120  766954 ssh_runner.go:195] Run: which crictl
	I1206 09:51:40.554644  766954 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:51:40.589886  766954 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:51:40.589973  766954 ssh_runner.go:195] Run: crio --version
	I1206 09:51:40.625631  766954 ssh_runner.go:195] Run: crio --version
	I1206 09:51:40.660996  766954 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
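With CRI-O back up, readiness is probed in two 60-second waits: first for the socket path, then for a crictl version handshake. The same probe by hand, matching the output above (cri-o 1.34.3, CRI API v1):

    stat /var/run/crio/crio.sock     # socket exists?
    sudo crictl version              # CRI handshake and reported versions
    crio --version                   # runtime binary version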
	
	
	==> CRI-O <==
	Dec 06 09:51:05 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:05.194422099Z" level=info msg="Started container" PID=1736 containerID=6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper id=91f17762-1dde-41df-b5d4-deb7885fe517 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a656bc53f746466b562a416a88688b94a905a32e7c17b373e6ff726b3fa61f7b
	Dec 06 09:51:05 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:05.996298078Z" level=info msg="Removing container: f148cb198130412d56efd21f4a75cca26c1c4f877d545fb395c27db1880c4128" id=d5e88e92-336b-4303-8ec8-8fb973f51044 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:51:06 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:06.006639758Z" level=info msg="Removed container f148cb198130412d56efd21f4a75cca26c1c4f877d545fb395c27db1880c4128: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper" id=d5e88e92-336b-4303-8ec8-8fb973f51044 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.019533937Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=024005c1-88f3-43df-8dda-4897113f146a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.020561257Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a747abdb-d1ba-4472-b096-5e975bd204ac name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.021608253Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c8b08ce4-b6d0-4862-bca3-0a610bb35096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.021759814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.026313499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.026552249Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d840de3023678b8bdf075bd94db8b568d4b278337e1eee760cfe86302a7f5b59/merged/etc/passwd: no such file or directory"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.026584529Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d840de3023678b8bdf075bd94db8b568d4b278337e1eee760cfe86302a7f5b59/merged/etc/group: no such file or directory"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.026881739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.052694109Z" level=info msg="Created container 42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2: kube-system/storage-provisioner/storage-provisioner" id=c8b08ce4-b6d0-4862-bca3-0a610bb35096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.053323756Z" level=info msg="Starting container: 42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2" id=43d87ddd-e37a-4b09-b778-34147784f4da name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:51:14 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:14.055583804Z" level=info msg="Started container" PID=1750 containerID=42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2 description=kube-system/storage-provisioner/storage-provisioner id=43d87ddd-e37a-4b09-b778-34147784f4da name=/runtime.v1.RuntimeService/StartContainer sandboxID=14e580adce428b36c9132236baa7509a51b9c4497356baf599f00cd2b70bef3e
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.90483697Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7a27758b-190f-41ab-b4e5-ee9207c5633e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.90589161Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=812606f0-db87-4599-a936-b9b0358e1153 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.906927036Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper" id=1ab84a79-e372-4e35-b891-75935b3af6c4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.907071077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.913001926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.913560924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.942620494Z" level=info msg="Created container 547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper" id=1ab84a79-e372-4e35-b891-75935b3af6c4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.943127663Z" level=info msg="Starting container: 547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824" id=7ab79706-36ad-436d-a518-7cf9f5a64ac1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:51:28 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:28.94476337Z" level=info msg="Started container" PID=1786 containerID=547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper id=7ab79706-36ad-436d-a518-7cf9f5a64ac1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a656bc53f746466b562a416a88688b94a905a32e7c17b373e6ff726b3fa61f7b
	Dec 06 09:51:29 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:29.060675573Z" level=info msg="Removing container: 6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5" id=d6ca9d31-1359-4096-ab4b-88311ea40d0c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:51:29 old-k8s-version-507108 crio[572]: time="2025-12-06T09:51:29.070419941Z" level=info msg="Removed container 6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4/dashboard-metrics-scraper" id=d6ca9d31-1359-4096-ab4b-88311ea40d0c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	547e878010f36       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago       Exited              dashboard-metrics-scraper   2                   a656bc53f7464       dashboard-metrics-scraper-5f989dc9cf-pntp4       kubernetes-dashboard
	42a8d521b2204       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   14e580adce428       storage-provisioner                              kube-system
	8cc53972a60a1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago       Running             kubernetes-dashboard        0                   f29aab9d1c7b8       kubernetes-dashboard-8694d4445c-bfcks            kubernetes-dashboard
	e7cdddacd3684       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           58 seconds ago       Running             coredns                     0                   da29f54d45cf6       coredns-5dd5756b68-qvppb                         kube-system
	81bb7d7fe77eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   31a7bf1d910c2       busybox                                          default
	75357b2b778c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   14e580adce428       storage-provisioner                              kube-system
	3eee5f1d4d117       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           58 seconds ago       Running             kube-proxy                  0                   03e226c026636       kube-proxy-q6xpd                                 kube-system
	0b68753786d00       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   199da8c3b7659       kindnet-pdc9w                                    kube-system
	c63bd209bc99c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   3c82706784dcc       kube-apiserver-old-k8s-version-507108            kube-system
	c1a010de0842e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   39b10f01f78a8       kube-controller-manager-old-k8s-version-507108   kube-system
	f2a122704894d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   de07cfa549c78       etcd-old-k8s-version-507108                      kube-system
	0ad3331f74421       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   e597ffc303f98       kube-scheduler-old-k8s-version-507108            kube-system
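The container status table above is what crictl ps -a reports on the node (the same fallback command shown in the log-gathering pass earlier). From the host it should be reproducible with minikube ssh, assuming this run's profile name:

    minikube ssh -p old-k8s-version-507108 -- sudo crictl ps -a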
	
	
	==> coredns [e7cdddacd3684d67407466e547704422e7b3d83504f8602a5eed09903630559d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33903 - 46442 "HINFO IN 2567026679596900439.7854628332536336291. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031751049s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-507108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-507108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=old-k8s-version-507108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_49_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:49:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-507108
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:51:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:51:12 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:51:12 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:51:12 +0000   Sat, 06 Dec 2025 09:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:51:12 +0000   Sat, 06 Dec 2025 09:50:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-507108
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                9c098f98-750a-4e92-a2e1-303c4ddd2d10
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-qvppb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-507108                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-pdc9w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-507108             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-507108    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-q6xpd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-507108             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-pntp4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bfcks             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s               kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s               kubelet          Node old-k8s-version-507108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s               kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node old-k8s-version-507108 event: Registered Node old-k8s-version-507108 in Controller
	  Normal  NodeReady                100s               kubelet          Node old-k8s-version-507108 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 63s)  kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 63s)  kubelet          Node old-k8s-version-507108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 63s)  kubelet          Node old-k8s-version-507108 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node old-k8s-version-507108 event: Registered Node old-k8s-version-507108 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [f2a122704894d5c8a04ed9e9c7215d82df671c9a0daf6fc8d12c524573aa0fba] <==
	{"level":"info","ts":"2025-12-06T09:50:39.482579Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-06T09:50:39.482281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-06T09:50:39.48272Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-06T09:50:39.482814Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:50:39.482843Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:50:39.484708Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-06T09:50:39.484975Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-06T09:50:39.485014Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-06T09:50:39.485499Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-06T09:50:39.485567Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-06T09:50:40.971263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-06T09:50:40.971313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-06T09:50:40.97134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-06T09:50:40.971351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-06T09:50:40.971356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-06T09:50:40.971364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-06T09:50:40.971372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-06T09:50:40.972291Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-507108 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-06T09:50:40.972307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:50:40.972314Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:50:40.972618Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-06T09:50:40.972653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-06T09:50:40.973625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-06T09:50:40.973664Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-06T09:51:23.415629Z","caller":"traceutil/trace.go:171","msg":"trace[1990098218] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"152.916154ms","start":"2025-12-06T09:51:23.262671Z","end":"2025-12-06T09:51:23.415587Z","steps":["trace[1990098218] 'process raft request'  (duration: 75.936244ms)","trace[1990098218] 'compare'  (duration: 76.805389ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:51:41 up  2:34,  0 user,  load average: 2.83, 2.35, 3.09
	Linux old-k8s-version-507108 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b68753786d00a4bb60e47f31e486200fabaaeb743cbffa572339af4be74a216] <==
	I1206 09:50:43.495254       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:50:43.495626       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:50:43.495791       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:50:43.495854       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:50:43.495896       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:50:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:50:43.695735       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:50:43.764415       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:50:43.764505       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:50:43.864433       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:50:44.064615       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:50:44.064643       1 metrics.go:72] Registering metrics
	I1206 09:50:44.064699       1 controller.go:711] "Syncing nftables rules"
	I1206 09:50:53.695957       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:50:53.696008       1 main.go:301] handling current node
	I1206 09:51:03.699513       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:51:03.699553       1 main.go:301] handling current node
	I1206 09:51:13.696694       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:51:13.696723       1 main.go:301] handling current node
	I1206 09:51:23.696049       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:51:23.696089       1 main.go:301] handling current node
	I1206 09:51:33.699759       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:51:33.699791       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c63bd209bc99c4f282c51f22f469fa540401634190be26711d150482c1f373d7] <==
	I1206 09:50:41.929591       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:50:41.946949       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1206 09:50:41.980299       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1206 09:50:41.980321       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1206 09:50:41.980353       1 shared_informer.go:318] Caches are synced for configmaps
	I1206 09:50:41.980401       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 09:50:41.980408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:50:41.980665       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1206 09:50:41.981152       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1206 09:50:41.981261       1 aggregator.go:166] initial CRD sync complete...
	I1206 09:50:41.981279       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 09:50:41.981287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:50:41.981297       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:50:42.796047       1 controller.go:624] quota admission added evaluator for: namespaces
	I1206 09:50:42.837804       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 09:50:42.858206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:50:42.869059       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:50:42.877043       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 09:50:42.884575       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:50:42.927035       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.8.96"}
	I1206 09:50:42.941884       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.144.212"}
	I1206 09:50:54.150175       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1206 09:50:54.402090       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 09:50:54.402091       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 09:50:54.450026       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c1a010de0842e99d0ec887649d5d92d06a2a5d0238c5a7ead0bc168b3d098af0] <==
	I1206 09:50:54.179171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="83.791µs"
	I1206 09:50:54.187581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.44µs"
	I1206 09:50:54.188339       1 shared_informer.go:318] Caches are synced for endpoint
	I1206 09:50:54.212935       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1206 09:50:54.217484       1 shared_informer.go:318] Caches are synced for ephemeral
	I1206 09:50:54.230816       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:50:54.237964       1 shared_informer.go:318] Caches are synced for stateful set
	I1206 09:50:54.257588       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1206 09:50:54.261919       1 shared_informer.go:318] Caches are synced for expand
	I1206 09:50:54.264301       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:50:54.284842       1 shared_informer.go:318] Caches are synced for attach detach
	I1206 09:50:54.305736       1 shared_informer.go:318] Caches are synced for persistent volume
	I1206 09:50:54.312014       1 shared_informer.go:318] Caches are synced for PVC protection
	I1206 09:50:54.682842       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:50:54.747271       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:50:54.747305       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1206 09:50:58.992306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.924087ms"
	I1206 09:50:58.992474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.901µs"
	I1206 09:51:05.034766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.597µs"
	I1206 09:51:06.006304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.35µs"
	I1206 09:51:07.013416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.367µs"
	I1206 09:51:21.426420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.793166ms"
	I1206 09:51:21.426561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.855µs"
	I1206 09:51:29.069912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.902µs"
	I1206 09:51:34.482218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.276µs"
	
	
	==> kube-proxy [3eee5f1d4d11750ebd0190ccbd3ec6afbf1f4658b007f40bf277f2f27891ed47] <==
	I1206 09:50:43.344000       1 server_others.go:69] "Using iptables proxy"
	I1206 09:50:43.354953       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1206 09:50:43.375539       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:50:43.378025       1 server_others.go:152] "Using iptables Proxier"
	I1206 09:50:43.378061       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 09:50:43.378069       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 09:50:43.378097       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 09:50:43.378329       1 server.go:846] "Version info" version="v1.28.0"
	I1206 09:50:43.378344       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:50:43.379004       1 config.go:97] "Starting endpoint slice config controller"
	I1206 09:50:43.379017       1 config.go:188] "Starting service config controller"
	I1206 09:50:43.379040       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 09:50:43.379042       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 09:50:43.379270       1 config.go:315] "Starting node config controller"
	I1206 09:50:43.379380       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 09:50:43.479209       1 shared_informer.go:318] Caches are synced for service config
	I1206 09:50:43.479254       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 09:50:43.479509       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0ad3331f74421cc30e54a8d7ed856cf70c19c103394bc37a9e789e115ed3c2b7] <==
	I1206 09:50:39.954404       1 serving.go:348] Generated self-signed cert in-memory
	W1206 09:50:41.918838       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:50:41.918876       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:50:41.918891       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:50:41.918910       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:50:41.937321       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1206 09:50:41.937358       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:50:41.938815       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:50:41.938863       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 09:50:41.939640       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1206 09:50:41.939683       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1206 09:50:42.039761       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.170131     736 topology_manager.go:215] "Topology Admit Handler" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-pntp4"
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.294602     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cadf548c-150e-4634-bed4-cec0c3fc5041-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-bfcks\" (UID: \"cadf548c-150e-4634-bed4-cec0c3fc5041\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfcks"
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.294649     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8n9\" (UniqueName: \"kubernetes.io/projected/64ea7f78-1d44-49cc-b8cb-b342159aa307-kube-api-access-fl8n9\") pod \"dashboard-metrics-scraper-5f989dc9cf-pntp4\" (UID: \"64ea7f78-1d44-49cc-b8cb-b342159aa307\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4"
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.294670     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zll\" (UniqueName: \"kubernetes.io/projected/cadf548c-150e-4634-bed4-cec0c3fc5041-kube-api-access-56zll\") pod \"kubernetes-dashboard-8694d4445c-bfcks\" (UID: \"cadf548c-150e-4634-bed4-cec0c3fc5041\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfcks"
	Dec 06 09:50:54 old-k8s-version-507108 kubelet[736]: I1206 09:50:54.294691     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/64ea7f78-1d44-49cc-b8cb-b342159aa307-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-pntp4\" (UID: \"64ea7f78-1d44-49cc-b8cb-b342159aa307\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4"
	Dec 06 09:50:58 old-k8s-version-507108 kubelet[736]: I1206 09:50:58.983931     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfcks" podStartSLOduration=0.704416073 podCreationTimestamp="2025-12-06 09:50:54 +0000 UTC" firstStartedPulling="2025-12-06 09:50:54.489418558 +0000 UTC m=+15.676631241" lastFinishedPulling="2025-12-06 09:50:58.768872575 +0000 UTC m=+19.956085267" observedRunningTime="2025-12-06 09:50:58.983824364 +0000 UTC m=+20.171037066" watchObservedRunningTime="2025-12-06 09:50:58.983870099 +0000 UTC m=+20.171082798"
	Dec 06 09:51:04 old-k8s-version-507108 kubelet[736]: I1206 09:51:04.990150     736 scope.go:117] "RemoveContainer" containerID="f148cb198130412d56efd21f4a75cca26c1c4f877d545fb395c27db1880c4128"
	Dec 06 09:51:05 old-k8s-version-507108 kubelet[736]: I1206 09:51:05.994672     736 scope.go:117] "RemoveContainer" containerID="f148cb198130412d56efd21f4a75cca26c1c4f877d545fb395c27db1880c4128"
	Dec 06 09:51:05 old-k8s-version-507108 kubelet[736]: I1206 09:51:05.994796     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:05 old-k8s-version-507108 kubelet[736]: E1206 09:51:05.995161     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:06 old-k8s-version-507108 kubelet[736]: I1206 09:51:06.999713     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:07 old-k8s-version-507108 kubelet[736]: E1206 09:51:07.000120     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:14 old-k8s-version-507108 kubelet[736]: I1206 09:51:14.019008     736 scope.go:117] "RemoveContainer" containerID="75357b2b778c96496d8c4298aeb32324c9a83e9955f0c8b8385c30a0381501f1"
	Dec 06 09:51:14 old-k8s-version-507108 kubelet[736]: I1206 09:51:14.472562     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:14 old-k8s-version-507108 kubelet[736]: E1206 09:51:14.472937     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:28 old-k8s-version-507108 kubelet[736]: I1206 09:51:28.904046     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:29 old-k8s-version-507108 kubelet[736]: I1206 09:51:29.059381     736 scope.go:117] "RemoveContainer" containerID="6f4ffadbc47d9fd1e7761b03f2217daf7a660c4fa967706c68ed8e6f6d4001b5"
	Dec 06 09:51:29 old-k8s-version-507108 kubelet[736]: I1206 09:51:29.059638     736 scope.go:117] "RemoveContainer" containerID="547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824"
	Dec 06 09:51:29 old-k8s-version-507108 kubelet[736]: E1206 09:51:29.060004     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:34 old-k8s-version-507108 kubelet[736]: I1206 09:51:34.472555     736 scope.go:117] "RemoveContainer" containerID="547e878010f365f88c8a3a0f081d8bae5624b0331db0e933aa03e59939ae4824"
	Dec 06 09:51:34 old-k8s-version-507108 kubelet[736]: E1206 09:51:34.472855     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pntp4_kubernetes-dashboard(64ea7f78-1d44-49cc-b8cb-b342159aa307)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pntp4" podUID="64ea7f78-1d44-49cc-b8cb-b342159aa307"
	Dec 06 09:51:36 old-k8s-version-507108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:51:36 old-k8s-version-507108 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:51:36 old-k8s-version-507108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:51:36 old-k8s-version-507108 systemd[1]: kubelet.service: Consumed 1.583s CPU time.
	
	
	==> kubernetes-dashboard [8cc53972a60a1088f14adc164b77fa7b466008c043e3b641e5958fa45bf8a14b] <==
	2025/12/06 09:50:58 Starting overwatch
	2025/12/06 09:50:58 Using namespace: kubernetes-dashboard
	2025/12/06 09:50:58 Using in-cluster config to connect to apiserver
	2025/12/06 09:50:58 Using secret token for csrf signing
	2025/12/06 09:50:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:50:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:50:58 Successful initial request to the apiserver, version: v1.28.0
	2025/12/06 09:50:58 Generating JWE encryption key
	2025/12/06 09:50:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:50:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:50:59 Initializing JWE encryption key from synchronized object
	2025/12/06 09:50:59 Creating in-cluster Sidecar client
	2025/12/06 09:50:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:50:59 Serving insecurely on HTTP port: 9090
	2025/12/06 09:51:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [42a8d521b220467e5e032a08bf01808f2fa002d0db02219fe392e54c79a711b2] <==
	I1206 09:51:14.069069       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:51:14.078583       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:51:14.078661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 09:51:31.476977       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:51:31.477191       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-507108_9f37a9b5-64b5-4140-80e1-561ff8316e7c!
	I1206 09:51:31.477218       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ae82bcf-d8bd-405a-ab47-eb637aa10d2b", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-507108_9f37a9b5-64b5-4140-80e1-561ff8316e7c became leader
	I1206 09:51:31.578170       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-507108_9f37a9b5-64b5-4140-80e1-561ff8316e7c!
	
	
	==> storage-provisioner [75357b2b778c96496d8c4298aeb32324c9a83e9955f0c8b8385c30a0381501f1] <==
	I1206 09:50:43.308484       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:51:13.310880       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
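For context on the storage-provisioner fatal above (main.go:39, "error getting server version"): that message is the provisioner's startup probe of the apiserver, which here times out against the service VIP 10.96.0.1:443 while the cluster is paused. The following is a minimal client-go sketch of that style of probe, not the provisioner's actual source; only the in-cluster config and the /version request are confirmed by the log.

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Build a config from the pod's service account, as an in-cluster
		// component like storage-provisioner does.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("error building in-cluster config: %v", err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("error building clientset: %v", err)
		}
		// GET /version against the service VIP (10.96.0.1:443 here); with
		// the apiserver paused this is the call that times out and is fatal.
		v, err := client.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("apiserver reachable, version %s", v.GitVersion)
	}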
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-507108 -n old-k8s-version-507108
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-507108 -n old-k8s-version-507108: exit status 2 (395.220872ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-507108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.65s)
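One detail worth reading out of the kubelet log above: the dashboard-metrics-scraper back-off grows from 10s to 20s across restarts because kubelet applies exponential CrashLoopBackOff, doubling per restart up to a cap. The helper below is a toy illustration of that schedule, not kubelet source; the 10s base and 5m cap mirror upstream kubelet defaults.

	package main

	import (
		"fmt"
		"time"
	)

	// crashLoopDelay mimics kubelet-style restart back-off: base * 2^restarts,
	// capped at maxDelay.
	func crashLoopDelay(restarts int) time.Duration {
		const (
			base     = 10 * time.Second
			maxDelay = 5 * time.Minute
		)
		d := base
		for i := 0; i < restarts; i++ {
			d *= 2
			if d >= maxDelay {
				return maxDelay
			}
		}
		return d
	}

	func main() {
		// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s; the 10s and 20s
		// steps match the two back-off messages in the kubelet log.
		for r := 0; r <= 6; r++ {
			fmt.Printf("restart %d -> back-off %v\n", r, crashLoopDelay(r))
		}
	}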

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (304.084056ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:52:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
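The MK_ADDON_ENABLE_PAUSED failure above is the pause check shelling out to "sudo runc list -f json" on a node where the runc state directory /run/runc does not exist (crio keeps its own runtime root), so the command exits 1 and the addon enable aborts. Below is a hedged Go sketch of that kind of check; the command string comes from the error text, while the struct fields and helper name are illustrative, not minikube's code.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds a subset of `runc list -f json` output
	// (illustrative subset of runc's JSON state fields).
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listRuncContainers() ([]runcContainer, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On this node /run/runc is missing, so runc exits 1 and the
			// caller surfaces "check paused: list paused" as above.
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		return cs, nil
	}

	func main() {
		cs, err := listRuncContainers()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Printf("found %d runc containers\n", len(cs))
	}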
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-521770 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-521770 describe deploy/metrics-server -n kube-system: exit status 1 (80.148703ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-521770 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
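The assertion above looks for the rewritten image "fake.domain/registry.k8s.io/echoserver:1.4" in the metrics-server Deployment; because the addon never deployed, the Get is NotFound and the captured deployment info is empty. A client-go sketch of the same verification follows (the kubeconfig path is a placeholder assumption, not from the report):

	package main

	import (
		"context"
		"fmt"
		"log"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; use whatever kubeconfig the test context uses.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		d, err := cs.AppsV1().Deployments("kube-system").
			Get(context.Background(), "metrics-server", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err) // here: NotFound, since the addon never deployed
		}
		for _, c := range d.Spec.Template.Spec.Containers {
			if strings.HasPrefix(c.Image, "fake.domain/") {
				fmt.Println("custom registry applied:", c.Image)
			}
		}
	}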
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-521770
helpers_test.go:243: (dbg) docker inspect no-preload-521770:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f",
	        "Created": "2025-12-06T09:51:06.611954102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 760665,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:51:06.649222822Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/hostname",
	        "HostsPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/hosts",
	        "LogPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f-json.log",
	        "Name": "/no-preload-521770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-521770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-521770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f",
	                "LowerDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-521770",
	                "Source": "/var/lib/docker/volumes/no-preload-521770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-521770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-521770",
	                "name.minikube.sigs.k8s.io": "no-preload-521770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "54f609dd01884b33beef45d2c365109bc2f8e54922e6b4784ddd61bc070a5e06",
	            "SandboxKey": "/var/run/docker/netns/54f609dd0188",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-521770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "335ab24bf65197b10f86bad2a0ebe3cc633e48da6bfe1bab2aae94fda11c69b4",
	                    "EndpointID": "fa3a7f4c3d1d3f2210857c4c013cc3ac83884df06905487036c9b2efec29fbb3",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "76:45:50:a3:e3:52",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-521770",
	                        "de37f97672bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
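From the inspect output above, the harness's follow-up steps dial the host-side port bindings, e.g. "22/tcp" mapped to 127.0.0.1:33191 for SSH. The report gathers this via the docker CLI; the Docker Go SDK sketch below extracts the same binding programmatically and is illustrative only.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		insp, err := cli.ContainerInspect(context.Background(), "no-preload-521770")
		if err != nil {
			log.Fatal(err)
		}
		// NetworkSettings.Ports maps container ports to host bindings; for
		// the dump above, "22/tcp" is bound to 127.0.0.1:33191.
		bindings := insp.NetworkSettings.Ports[nat.Port("22/tcp")]
		if len(bindings) == 0 {
			log.Fatal("no host binding for 22/tcp")
		}
		fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIP, bindings[0].HostPort)
	}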
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521770 -n no-preload-521770
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-521770 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-521770 logs -n 25: (1.184169392s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-983381 sudo containerd config dump                                                                                                                                                                                                  │ cilium-983381                │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-983381                │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-983381                │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-983381                │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ ssh     │ -p cilium-983381 sudo crio config                                                                                                                                                                                                             │ cilium-983381                │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	│ delete  │ -p cilium-983381                                                                                                                                                                                                                              │ cilium-983381                │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:49 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │                     │
	│ stop    │ -p old-k8s-version-507108 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-507108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p cert-expiration-669264 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-669264       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p cert-expiration-669264                                                                                                                                                                                                                     │ cert-expiration-669264       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ image   │ old-k8s-version-507108 image list --format=json                                                                                                                                                                                               │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ pause   │ -p old-k8s-version-507108 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                     │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p kubernetes-upgrade-581224                                                                                                                                                                                                                  │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                     │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ delete  │ -p disable-driver-mounts-920129                                                                                                                                                                                                               │ disable-driver-mounts-920129 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:51:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:51:45.857201  771291 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:51:45.857295  771291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:51:45.857302  771291 out.go:374] Setting ErrFile to fd 2...
	I1206 09:51:45.857306  771291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:51:45.857544  771291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:51:45.858013  771291 out.go:368] Setting JSON to false
	I1206 09:51:45.859102  771291 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9250,"bootTime":1765005456,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:51:45.859159  771291 start.go:143] virtualization: kvm guest
	I1206 09:51:45.860908  771291 out.go:179] * [default-k8s-diff-port-759696] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:51:45.861935  771291 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:51:45.861945  771291 notify.go:221] Checking for updates...
	I1206 09:51:45.864410  771291 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:51:45.865434  771291 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:51:45.866391  771291 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:51:45.867509  771291 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:51:45.868620  771291 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:51:45.870213  771291 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:51:45.870353  771291 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:51:45.870494  771291 config.go:182] Loaded profile config "stopped-upgrade-031481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:51:45.870633  771291 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:51:45.894142  771291 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:51:45.894328  771291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:51:45.957031  771291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-06 09:51:45.944858978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:51:45.957140  771291 docker.go:319] overlay module found
	I1206 09:51:45.958650  771291 out.go:179] * Using the docker driver based on user configuration
	I1206 09:51:45.959653  771291 start.go:309] selected driver: docker
	I1206 09:51:45.959670  771291 start.go:927] validating driver "docker" against <nil>
	I1206 09:51:45.959682  771291 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:51:45.960248  771291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:51:46.027029  771291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-06 09:51:46.015554521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:51:46.027196  771291 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:51:46.027414  771291 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:51:46.029517  771291 out.go:179] * Using Docker driver with root privileges
	I1206 09:51:46.030728  771291 cni.go:84] Creating CNI manager for ""
	I1206 09:51:46.030808  771291 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:51:46.030825  771291 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:51:46.030910  771291 start.go:353] cluster config:
	{Name:default-k8s-diff-port-759696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:51:46.032260  771291 out.go:179] * Starting "default-k8s-diff-port-759696" primary control-plane node in "default-k8s-diff-port-759696" cluster
	I1206 09:51:46.033350  771291 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:51:46.036060  771291 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:51:46.037312  771291 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:51:46.037349  771291 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:51:46.037362  771291 cache.go:65] Caching tarball of preloaded images
	I1206 09:51:46.037490  771291 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:51:46.037449  771291 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:51:46.037506  771291 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:51:46.037623  771291 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/config.json ...
	I1206 09:51:46.037651  771291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/config.json: {Name:mkee6572d7f196dab94ecefb421f12341b8ba313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:46.057595  771291 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:51:46.057613  771291 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:51:46.057627  771291 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:51:46.057657  771291 start.go:360] acquireMachinesLock for default-k8s-diff-port-759696: {Name:mk21dcb0f53684fa542f966aef3d4d221b10af2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:51:46.057741  771291 start.go:364] duration metric: took 64.512µs to acquireMachinesLock for "default-k8s-diff-port-759696"
	I1206 09:51:46.057761  771291 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-759696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:51:46.057824  771291 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:51:45.815101  771042 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:51:45.815372  771042 start.go:159] libmachine.API.Create for "embed-certs-997968" (driver="docker")
	I1206 09:51:45.815418  771042 client.go:173] LocalClient.Create starting
	I1206 09:51:45.815574  771042 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:51:45.815624  771042 main.go:143] libmachine: Decoding PEM data...
	I1206 09:51:45.815651  771042 main.go:143] libmachine: Parsing certificate...
	I1206 09:51:45.815729  771042 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:51:45.815758  771042 main.go:143] libmachine: Decoding PEM data...
	I1206 09:51:45.815775  771042 main.go:143] libmachine: Parsing certificate...
	I1206 09:51:45.816185  771042 cli_runner.go:164] Run: docker network inspect embed-certs-997968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:51:45.835265  771042 cli_runner.go:211] docker network inspect embed-certs-997968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:51:45.835337  771042 network_create.go:284] running [docker network inspect embed-certs-997968] to gather additional debugging logs...
	I1206 09:51:45.835361  771042 cli_runner.go:164] Run: docker network inspect embed-certs-997968
	W1206 09:51:45.853614  771042 cli_runner.go:211] docker network inspect embed-certs-997968 returned with exit code 1
	I1206 09:51:45.853658  771042 network_create.go:287] error running [docker network inspect embed-certs-997968]: docker network inspect embed-certs-997968: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-997968 not found
	I1206 09:51:45.853677  771042 network_create.go:289] output of [docker network inspect embed-certs-997968]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-997968 not found
	
	** /stderr **
	I1206 09:51:45.853786  771042 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:51:45.873559  771042 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:51:45.874198  771042 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:51:45.874837  771042 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:51:45.875239  771042 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ff25d0f3f317 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:b4:c0:5d:75:0d} reservation:<nil>}
	I1206 09:51:45.876038  771042 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e0c410}
	I1206 09:51:45.876070  771042 network_create.go:124] attempt to create docker network embed-certs-997968 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:51:45.876118  771042 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-997968 embed-certs-997968
	I1206 09:51:45.927790  771042 network_create.go:108] docker network embed-certs-997968 192.168.85.0/24 created
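
The network.go lines above trace minikube's free-subnet scan: each existing bridge's /24 is skipped and the first unused candidate wins, with the third octet stepping by 9 (49, 58, 67, 76, 85 here, and later 94 and 103 in this run). A minimal Go sketch of the same idea, with a hypothetical helper name rather than minikube's actual code:

package main

import "fmt"

// firstFreeSubnet mirrors the scan traced above: walk /24 candidates whose
// third octet steps by 9 and return the first CIDR no existing bridge uses.
// Hypothetical helper, not minikube's actual implementation.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24 true
}
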
	I1206 09:51:45.927840  771042 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-997968" container
	I1206 09:51:45.927902  771042 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:51:45.947587  771042 cli_runner.go:164] Run: docker volume create embed-certs-997968 --label name.minikube.sigs.k8s.io=embed-certs-997968 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:51:45.968180  771042 oci.go:103] Successfully created a docker volume embed-certs-997968
	I1206 09:51:45.968281  771042 cli_runner.go:164] Run: docker run --rm --name embed-certs-997968-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-997968 --entrypoint /usr/bin/test -v embed-certs-997968:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:51:46.403294  771042 oci.go:107] Successfully prepared a docker volume embed-certs-997968
	I1206 09:51:46.403391  771042 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:51:46.403411  771042 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:51:46.403529  771042 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-997968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
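
The docker run above is how the preloaded image tarball reaches the machine's volume: the lz4 archive is bind-mounted read-only, the named volume is mounted at /extractDir, and tar inside the kicbase image (pinned by digest) performs the extraction. A sketch of composing that invocation via os/exec (helper name and error wrapping are mine):

package preload

import (
	"fmt"
	"os/exec"
)

// extractPreload sketches the docker invocation logged above: mount the lz4
// tarball read-only, mount the machine's named volume at /extractDir, and
// run tar from inside the kicbase image to decompress into the volume.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}
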
	I1206 09:51:45.677089  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:51:45.677176  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:51:48.225300  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:51:48.225862  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:51:48.225922  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:51:48.225985  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:51:48.267811  725997 cri.go:89] found id: "ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c"
	I1206 09:51:48.267839  725997 cri.go:89] found id: ""
	I1206 09:51:48.267849  725997 logs.go:282] 1 containers: [ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c]
	I1206 09:51:48.267914  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:48.271842  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:51:48.271920  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:51:48.312999  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:51:48.313021  725997 cri.go:89] found id: ""
	I1206 09:51:48.313030  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:51:48.313083  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:48.317650  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:51:48.317727  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:51:48.358285  725997 cri.go:89] found id: ""
	I1206 09:51:48.358310  725997 logs.go:282] 0 containers: []
	W1206 09:51:48.358317  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:51:48.358323  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:51:48.358374  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:51:48.404130  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:51:48.404157  725997 cri.go:89] found id: ""
	I1206 09:51:48.404169  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:51:48.404231  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:48.408212  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:51:48.408280  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:51:48.452221  725997 cri.go:89] found id: ""
	I1206 09:51:48.452257  725997 logs.go:282] 0 containers: []
	W1206 09:51:48.452267  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:51:48.452282  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:51:48.452353  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:51:48.498975  725997 cri.go:89] found id: "f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113"
	I1206 09:51:48.499004  725997 cri.go:89] found id: ""
	I1206 09:51:48.499016  725997 logs.go:282] 1 containers: [f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113]
	I1206 09:51:48.499081  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:48.503848  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:51:48.503923  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:51:48.549335  725997 cri.go:89] found id: ""
	I1206 09:51:48.549367  725997 logs.go:282] 0 containers: []
	W1206 09:51:48.549377  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:51:48.549385  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:51:48.549451  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:51:48.586645  725997 cri.go:89] found id: ""
	I1206 09:51:48.586673  725997 logs.go:282] 0 containers: []
	W1206 09:51:48.586692  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:51:48.586709  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:51:48.586728  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:51:48.633020  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:51:48.633058  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:51:48.768451  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:51:48.768505  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:51:48.832604  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:51:48.832624  725997 logs.go:123] Gathering logs for kube-apiserver [ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c] ...
	I1206 09:51:48.832637  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c"
	I1206 09:51:48.869413  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:51:48.869446  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:51:48.904121  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:51:48.904158  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:51:48.984633  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:51:48.984669  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:51:49.028623  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:51:49.028664  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:51:49.049078  725997 logs.go:123] Gathering logs for kube-controller-manager [f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113] ...
	I1206 09:51:49.049111  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113"
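
The block above (process 725997) is one full pass of minikube's log gathering while the apiserver is down: for each control-plane component it asks crictl for matching container IDs, then tails 400 lines from each hit. A condensed sketch of that loop (helper name and map layout are mine):

package logs

import (
	"fmt"
	"os/exec"
	"strings"
)

// components matches the names probed in the pass above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// gather lists container IDs per component via crictl, then tails 400 log
// lines from each hit, mirroring the loop traced above.
func gather() (map[string]string, error) {
	out := map[string]string{}
	for _, c := range components {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
		if err != nil {
			return nil, fmt.Errorf("list %s: %w", c, err)
		}
		for _, id := range strings.Fields(string(ids)) {
			logTail, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return nil, fmt.Errorf("logs %s: %w", id, err)
			}
			out[c+"/"+id] = string(logTail)
		}
	}
	return out, nil
}
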
	W1206 09:51:47.992783  760217 node_ready.go:57] node "no-preload-521770" has "Ready":"False" status (will retry)
	W1206 09:51:50.492628  760217 node_ready.go:57] node "no-preload-521770" has "Ready":"False" status (will retry)
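
The two node_ready.go lines above show the Ready-condition poll for no-preload-521770, retrying roughly every 2.5 seconds until the node flips to Ready. A plausible client-go equivalent (an assumption on my part, not minikube's exact code):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node's Ready condition until it reports True or
// the deadline passes, mirroring the retry loop traced above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows ~2.5s between retries
	}
	return context.DeadlineExceeded
}
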
	I1206 09:51:46.059696  771291 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:51:46.059908  771291 start.go:159] libmachine.API.Create for "default-k8s-diff-port-759696" (driver="docker")
	I1206 09:51:46.059943  771291 client.go:173] LocalClient.Create starting
	I1206 09:51:46.060025  771291 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:51:46.060059  771291 main.go:143] libmachine: Decoding PEM data...
	I1206 09:51:46.060079  771291 main.go:143] libmachine: Parsing certificate...
	I1206 09:51:46.060124  771291 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:51:46.060142  771291 main.go:143] libmachine: Decoding PEM data...
	I1206 09:51:46.060165  771291 main.go:143] libmachine: Parsing certificate...
	I1206 09:51:46.060496  771291 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:51:46.078392  771291 cli_runner.go:211] docker network inspect default-k8s-diff-port-759696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:51:46.078485  771291 network_create.go:284] running [docker network inspect default-k8s-diff-port-759696] to gather additional debugging logs...
	I1206 09:51:46.078506  771291 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759696
	W1206 09:51:46.096357  771291 cli_runner.go:211] docker network inspect default-k8s-diff-port-759696 returned with exit code 1
	I1206 09:51:46.096394  771291 network_create.go:287] error running [docker network inspect default-k8s-diff-port-759696]: docker network inspect default-k8s-diff-port-759696: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-759696 not found
	I1206 09:51:46.096418  771291 network_create.go:289] output of [docker network inspect default-k8s-diff-port-759696]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-759696 not found
	
	** /stderr **
	I1206 09:51:46.096593  771291 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:51:46.117246  771291 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:51:46.117947  771291 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:51:46.118719  771291 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:51:46.119169  771291 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ff25d0f3f317 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:b4:c0:5d:75:0d} reservation:<nil>}
	I1206 09:51:46.119802  771291 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5d9447c39c3c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e2:61:5e:c6:7b:21} reservation:<nil>}
	I1206 09:51:46.120301  771291 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-335ab24bf651 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:0e:e2:df:e2:5c:3e} reservation:<nil>}
	I1206 09:51:46.121006  771291 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f009b0}
	I1206 09:51:46.121029  771291 network_create.go:124] attempt to create docker network default-k8s-diff-port-759696 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1206 09:51:46.121088  771291 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-759696 default-k8s-diff-port-759696
	I1206 09:51:46.174781  771291 network_create.go:108] docker network default-k8s-diff-port-759696 192.168.103.0/24 created
	I1206 09:51:46.174821  771291 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-759696" container
	I1206 09:51:46.174925  771291 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:51:46.193682  771291 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-759696 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759696 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:51:46.213884  771291 oci.go:103] Successfully created a docker volume default-k8s-diff-port-759696
	I1206 09:51:46.213970  771291 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-759696-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759696 --entrypoint /usr/bin/test -v default-k8s-diff-port-759696:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:51:46.610489  771291 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-759696
	I1206 09:51:46.610606  771291 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:51:46.610626  771291 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:51:46.610723  771291 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759696:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:51:52.096363  760217 node_ready.go:49] node "no-preload-521770" is "Ready"
	I1206 09:51:52.096411  760217 node_ready.go:38] duration metric: took 13.607262986s for node "no-preload-521770" to be "Ready" ...
	I1206 09:51:52.096432  760217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:51:52.096653  760217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:51:52.113233  760217 api_server.go:72] duration metric: took 14.256421781s to wait for apiserver process to appear ...
	I1206 09:51:52.113268  760217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:51:52.113296  760217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:51:52.118724  760217 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1206 09:51:52.120143  760217 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:51:52.120174  760217 api_server.go:131] duration metric: took 6.897312ms to wait for apiserver health ...
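
The lines above probe the apiserver's /healthz until it answers 200 "ok" (the parallel 725997 run elsewhere in this log shows the connection-refused failure mode of the same probe). A minimal Go sketch of such a check; TLS verification is skipped here purely to keep the example short:

package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// checkHealthz performs one probe: GET /healthz and treat a 200 body as
// healthy. A dial error here typically means the apiserver is still starting.
func checkHealthz(url string) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // brevity only
	}}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while the apiserver restarts
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz %d: %s", resp.StatusCode, body)
	}
	return nil
}
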
	I1206 09:51:52.120186  760217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:51:52.124140  760217 system_pods.go:59] 8 kube-system pods found
	I1206 09:51:52.124172  760217 system_pods.go:61] "coredns-7d764666f9-mhwh5" [a8d7204c-9d11-4944-bc37-a5788a67aaab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:51:52.124178  760217 system_pods.go:61] "etcd-no-preload-521770" [70631f4e-162f-4705-8a60-a85268dc3dcc] Running
	I1206 09:51:52.124184  760217 system_pods.go:61] "kindnet-2w8b5" [6fd87fa0-c550-4070-86fc-32b4938f35da] Running
	I1206 09:51:52.124188  760217 system_pods.go:61] "kube-apiserver-no-preload-521770" [161ebc70-6169-4cac-80c1-74ac9a873e0f] Running
	I1206 09:51:52.124197  760217 system_pods.go:61] "kube-controller-manager-no-preload-521770" [56d3cb3d-a16d-4403-a714-b61ef6ee324c] Running
	I1206 09:51:52.124201  760217 system_pods.go:61] "kube-proxy-t7vrx" [e4a78bfd-8025-45f5-94fa-116ef311de94] Running
	I1206 09:51:52.124204  760217 system_pods.go:61] "kube-scheduler-no-preload-521770" [ab126d16-4ccb-4e6a-bedb-412cc082844f] Running
	I1206 09:51:52.124208  760217 system_pods.go:61] "storage-provisioner" [6be872af-41f0-4aae-adf9-40313b511c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:51:52.124216  760217 system_pods.go:74] duration metric: took 4.022108ms to wait for pod list to return data ...
	I1206 09:51:52.124232  760217 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:51:52.126817  760217 default_sa.go:45] found service account: "default"
	I1206 09:51:52.126839  760217 default_sa.go:55] duration metric: took 2.597482ms for default service account to be created ...
	I1206 09:51:52.126849  760217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:51:52.129797  760217 system_pods.go:86] 8 kube-system pods found
	I1206 09:51:52.129840  760217 system_pods.go:89] "coredns-7d764666f9-mhwh5" [a8d7204c-9d11-4944-bc37-a5788a67aaab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:51:52.129853  760217 system_pods.go:89] "etcd-no-preload-521770" [70631f4e-162f-4705-8a60-a85268dc3dcc] Running
	I1206 09:51:52.129866  760217 system_pods.go:89] "kindnet-2w8b5" [6fd87fa0-c550-4070-86fc-32b4938f35da] Running
	I1206 09:51:52.129872  760217 system_pods.go:89] "kube-apiserver-no-preload-521770" [161ebc70-6169-4cac-80c1-74ac9a873e0f] Running
	I1206 09:51:52.129880  760217 system_pods.go:89] "kube-controller-manager-no-preload-521770" [56d3cb3d-a16d-4403-a714-b61ef6ee324c] Running
	I1206 09:51:52.129885  760217 system_pods.go:89] "kube-proxy-t7vrx" [e4a78bfd-8025-45f5-94fa-116ef311de94] Running
	I1206 09:51:52.129889  760217 system_pods.go:89] "kube-scheduler-no-preload-521770" [ab126d16-4ccb-4e6a-bedb-412cc082844f] Running
	I1206 09:51:52.129894  760217 system_pods.go:89] "storage-provisioner" [6be872af-41f0-4aae-adf9-40313b511c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:51:52.129923  760217 retry.go:31] will retry after 221.571699ms: missing components: kube-dns
	I1206 09:51:52.548598  760217 system_pods.go:86] 8 kube-system pods found
	I1206 09:51:52.548649  760217 system_pods.go:89] "coredns-7d764666f9-mhwh5" [a8d7204c-9d11-4944-bc37-a5788a67aaab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:51:52.548658  760217 system_pods.go:89] "etcd-no-preload-521770" [70631f4e-162f-4705-8a60-a85268dc3dcc] Running
	I1206 09:51:52.548667  760217 system_pods.go:89] "kindnet-2w8b5" [6fd87fa0-c550-4070-86fc-32b4938f35da] Running
	I1206 09:51:52.548673  760217 system_pods.go:89] "kube-apiserver-no-preload-521770" [161ebc70-6169-4cac-80c1-74ac9a873e0f] Running
	I1206 09:51:52.548684  760217 system_pods.go:89] "kube-controller-manager-no-preload-521770" [56d3cb3d-a16d-4403-a714-b61ef6ee324c] Running
	I1206 09:51:52.548689  760217 system_pods.go:89] "kube-proxy-t7vrx" [e4a78bfd-8025-45f5-94fa-116ef311de94] Running
	I1206 09:51:52.548694  760217 system_pods.go:89] "kube-scheduler-no-preload-521770" [ab126d16-4ccb-4e6a-bedb-412cc082844f] Running
	I1206 09:51:52.548702  760217 system_pods.go:89] "storage-provisioner" [6be872af-41f0-4aae-adf9-40313b511c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:51:52.548725  760217 retry.go:31] will retry after 329.100472ms: missing components: kube-dns
	I1206 09:51:52.882282  760217 system_pods.go:86] 8 kube-system pods found
	I1206 09:51:52.882315  760217 system_pods.go:89] "coredns-7d764666f9-mhwh5" [a8d7204c-9d11-4944-bc37-a5788a67aaab] Running
	I1206 09:51:52.882321  760217 system_pods.go:89] "etcd-no-preload-521770" [70631f4e-162f-4705-8a60-a85268dc3dcc] Running
	I1206 09:51:52.882324  760217 system_pods.go:89] "kindnet-2w8b5" [6fd87fa0-c550-4070-86fc-32b4938f35da] Running
	I1206 09:51:52.882328  760217 system_pods.go:89] "kube-apiserver-no-preload-521770" [161ebc70-6169-4cac-80c1-74ac9a873e0f] Running
	I1206 09:51:52.882332  760217 system_pods.go:89] "kube-controller-manager-no-preload-521770" [56d3cb3d-a16d-4403-a714-b61ef6ee324c] Running
	I1206 09:51:52.882335  760217 system_pods.go:89] "kube-proxy-t7vrx" [e4a78bfd-8025-45f5-94fa-116ef311de94] Running
	I1206 09:51:52.882339  760217 system_pods.go:89] "kube-scheduler-no-preload-521770" [ab126d16-4ccb-4e6a-bedb-412cc082844f] Running
	I1206 09:51:52.882342  760217 system_pods.go:89] "storage-provisioner" [6be872af-41f0-4aae-adf9-40313b511c3c] Running
	I1206 09:51:52.882350  760217 system_pods.go:126] duration metric: took 755.495519ms to wait for k8s-apps to be running ...
	I1206 09:51:52.882358  760217 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:51:52.882401  760217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:51:52.896359  760217 system_svc.go:56] duration metric: took 13.991634ms WaitForService to wait for kubelet
	I1206 09:51:52.896390  760217 kubeadm.go:587] duration metric: took 15.039586862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:51:52.896410  760217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:51:52.899856  760217 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:51:52.899885  760217 node_conditions.go:123] node cpu capacity is 8
	I1206 09:51:52.899901  760217 node_conditions.go:105] duration metric: took 3.487398ms to run NodePressure ...
	I1206 09:51:52.899916  760217 start.go:242] waiting for startup goroutines ...
	I1206 09:51:52.899923  760217 start.go:247] waiting for cluster config update ...
	I1206 09:51:52.899933  760217 start.go:256] writing updated cluster config ...
	I1206 09:51:52.900170  760217 ssh_runner.go:195] Run: rm -f paused
	I1206 09:51:52.903876  760217 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:51:52.906986  760217 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mhwh5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:52.910995  760217 pod_ready.go:94] pod "coredns-7d764666f9-mhwh5" is "Ready"
	I1206 09:51:52.911015  760217 pod_ready.go:86] duration metric: took 4.011222ms for pod "coredns-7d764666f9-mhwh5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:52.913250  760217 pod_ready.go:83] waiting for pod "etcd-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:52.917200  760217 pod_ready.go:94] pod "etcd-no-preload-521770" is "Ready"
	I1206 09:51:52.917217  760217 pod_ready.go:86] duration metric: took 3.94657ms for pod "etcd-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:52.919060  760217 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:52.922984  760217 pod_ready.go:94] pod "kube-apiserver-no-preload-521770" is "Ready"
	I1206 09:51:52.923015  760217 pod_ready.go:86] duration metric: took 3.932229ms for pod "kube-apiserver-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:52.924860  760217 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:53.310350  760217 pod_ready.go:94] pod "kube-controller-manager-no-preload-521770" is "Ready"
	I1206 09:51:53.310388  760217 pod_ready.go:86] duration metric: took 385.506298ms for pod "kube-controller-manager-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:53.507891  760217 pod_ready.go:83] waiting for pod "kube-proxy-t7vrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:53.908329  760217 pod_ready.go:94] pod "kube-proxy-t7vrx" is "Ready"
	I1206 09:51:53.908357  760217 pod_ready.go:86] duration metric: took 400.43572ms for pod "kube-proxy-t7vrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:54.108303  760217 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:54.507296  760217 pod_ready.go:94] pod "kube-scheduler-no-preload-521770" is "Ready"
	I1206 09:51:54.507331  760217 pod_ready.go:86] duration metric: took 399.003634ms for pod "kube-scheduler-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:51:54.507344  760217 pod_ready.go:40] duration metric: took 1.603433682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:51:54.554946  760217 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:51:54.556466  760217 out.go:179] * Done! kubectl is now configured to use "no-preload-521770" cluster and "default" namespace by default
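
The system_pods retry loop above (three passes with jittered ~220-330ms waits, until coredns and storage-provisioner leave Pending) follows a simple pattern: scan the kube-system pods, report what is missing, sleep, rescan. A stripped-down sketch with simplified types (the callback stands in for the real pod listing):

package podwait

import (
	"fmt"
	"time"
)

// waitRunning sketches the retry pattern traced above: running reports
// whether the pod backing a component (e.g. "kube-dns") is Running; on a
// miss we sleep briefly and rescan until the deadline.
func waitRunning(running func(string) (bool, error), required []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		missing := ""
		for _, c := range required {
			if ok, err := running(c); err != nil || !ok {
				missing = c
				break
			}
		}
		if missing == "" {
			return nil
		}
		time.Sleep(300 * time.Millisecond) // the log shows ~220-330ms jittered retries
	}
	return fmt.Errorf("timed out waiting for components")
}
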
	I1206 09:51:52.571220  771042 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-997968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (6.167632042s)
	I1206 09:51:52.571252  771042 kic.go:203] duration metric: took 6.167838175s to extract preloaded images to volume ...
	W1206 09:51:52.571362  771042 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:51:52.571405  771042 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:51:52.571447  771042 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:51:52.656351  771042 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-997968 --name embed-certs-997968 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-997968 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-997968 --network embed-certs-997968 --ip 192.168.85.2 --volume embed-certs-997968:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:51:53.087640  771042 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Running}}
	I1206 09:51:53.109595  771042 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Status}}
	I1206 09:51:53.134190  771042 cli_runner.go:164] Run: docker exec embed-certs-997968 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:51:53.187392  771042 oci.go:144] the created container "embed-certs-997968" has a running status.
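
Between the docker run and the "has a running status" line above, minikube inspects the container's state. A small sketch of that check plus a poll around it (function names are mine):

package kicwait

import (
	"os/exec"
	"strings"
	"time"
)

// containerRunning asks Docker for the container's State.Running flag and
// parses the "true"/"false" it prints, as in the inspect calls above.
func containerRunning(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Running}}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

// waitRunning polls until the container reports running or attempts run out.
func waitRunning(name string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		if ok, err := containerRunning(name); err == nil && ok {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}
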
	I1206 09:51:53.187435  771042 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa...
	I1206 09:51:53.221384  771042 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:51:53.263092  771042 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Status}}
	I1206 09:51:53.297002  771042 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:51:53.297029  771042 kic_runner.go:114] Args: [docker exec --privileged embed-certs-997968 chown docker:docker /home/docker/.ssh/authorized_keys]
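
The kic SSH setup above generates an id_rsa keypair, copies the public half into /home/docker/.ssh/authorized_keys inside the container, and chowns it to the docker user. A sketch of the key-generation half using crypto/rsa and golang.org/x/crypto/ssh (paths, key size, and helper name are my assumptions):

package sshkey

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeKeyPair generates an RSA key, writes the PEM private key, and writes
// the authorized_keys-format public key next to it, mirroring the step
// traced above (minikube writes to .minikube/machines/<name>/id_rsa).
func writeKeyPair(privPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(privPath+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
}
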
	I1206 09:51:53.363680  771042 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Status}}
	I1206 09:51:53.389575  771042 machine.go:94] provisionDockerMachine start ...
	I1206 09:51:53.389676  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:53.410166  771042 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:53.410556  771042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33201 <nil> <nil>}
	I1206 09:51:53.410579  771042 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:51:53.411403  771042 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52460->127.0.0.1:33201: read: connection reset by peer
	I1206 09:51:51.585401  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:51:51.585938  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:51:51.586008  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:51:51.586075  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:51:51.622246  725997 cri.go:89] found id: "ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c"
	I1206 09:51:51.622269  725997 cri.go:89] found id: ""
	I1206 09:51:51.622279  725997 logs.go:282] 1 containers: [ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c]
	I1206 09:51:51.622337  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:51.626143  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:51:51.626221  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:51:51.660613  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:51:51.660636  725997 cri.go:89] found id: ""
	I1206 09:51:51.660645  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:51:51.660708  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:51.664557  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:51:51.664624  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:51:51.702595  725997 cri.go:89] found id: ""
	I1206 09:51:51.702619  725997 logs.go:282] 0 containers: []
	W1206 09:51:51.702626  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:51:51.702633  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:51:51.702689  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:51:51.739074  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:51:51.739098  725997 cri.go:89] found id: ""
	I1206 09:51:51.739107  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:51:51.739161  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:51.743117  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:51:51.743177  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:51:51.779415  725997 cri.go:89] found id: ""
	I1206 09:51:51.779439  725997 logs.go:282] 0 containers: []
	W1206 09:51:51.779446  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:51:51.779463  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:51:51.779529  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:51:51.817414  725997 cri.go:89] found id: "f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113"
	I1206 09:51:51.817439  725997 cri.go:89] found id: ""
	I1206 09:51:51.817451  725997 logs.go:282] 1 containers: [f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113]
	I1206 09:51:51.817528  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:51.821818  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:51:51.821871  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:51:51.858969  725997 cri.go:89] found id: ""
	I1206 09:51:51.859006  725997 logs.go:282] 0 containers: []
	W1206 09:51:51.859017  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:51:51.859026  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:51:51.859086  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:51:51.898521  725997 cri.go:89] found id: ""
	I1206 09:51:51.898552  725997 logs.go:282] 0 containers: []
	W1206 09:51:51.898562  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:51:51.898581  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:51:51.898599  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:51:51.998825  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:51:51.998867  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:51:52.020515  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:51:52.020550  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:51:52.082057  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:51:52.082080  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:51:52.082093  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:51:52.123392  725997 logs.go:123] Gathering logs for kube-controller-manager [f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113] ...
	I1206 09:51:52.123425  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113"
	I1206 09:51:52.159614  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:51:52.159640  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:51:52.201315  725997 logs.go:123] Gathering logs for kube-apiserver [ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c] ...
	I1206 09:51:52.201357  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c"
	I1206 09:51:52.240979  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:51:52.241020  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:51:52.323214  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:51:52.323247  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:51:54.862400  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:51:54.862814  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:51:54.862886  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:51:54.862946  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:51:54.899113  725997 cri.go:89] found id: "ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c"
	I1206 09:51:54.899140  725997 cri.go:89] found id: ""
	I1206 09:51:54.899152  725997 logs.go:282] 1 containers: [ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c]
	I1206 09:51:54.899212  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:54.903092  725997 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:51:54.903154  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:51:54.941047  725997 cri.go:89] found id: "296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:51:54.941072  725997 cri.go:89] found id: ""
	I1206 09:51:54.941084  725997 logs.go:282] 1 containers: [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9]
	I1206 09:51:54.941140  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:54.944910  725997 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:51:54.944976  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:51:54.983037  725997 cri.go:89] found id: ""
	I1206 09:51:54.983065  725997 logs.go:282] 0 containers: []
	W1206 09:51:54.983075  725997 logs.go:284] No container was found matching "coredns"
	I1206 09:51:54.983086  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:51:54.983145  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:51:55.018678  725997 cri.go:89] found id: "93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:51:55.018750  725997 cri.go:89] found id: ""
	I1206 09:51:55.018769  725997 logs.go:282] 1 containers: [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7]
	I1206 09:51:55.018833  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:55.022944  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:51:55.023010  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:51:55.068945  725997 cri.go:89] found id: ""
	I1206 09:51:55.068973  725997 logs.go:282] 0 containers: []
	W1206 09:51:55.068989  725997 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:51:55.069059  725997 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:51:55.069135  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:51:55.105242  725997 cri.go:89] found id: "f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113"
	I1206 09:51:55.105267  725997 cri.go:89] found id: ""
	I1206 09:51:55.105277  725997 logs.go:282] 1 containers: [f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113]
	I1206 09:51:55.105333  725997 ssh_runner.go:195] Run: which crictl
	I1206 09:51:55.109762  725997 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:51:55.109829  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:51:55.146033  725997 cri.go:89] found id: ""
	I1206 09:51:55.146055  725997 logs.go:282] 0 containers: []
	W1206 09:51:55.146063  725997 logs.go:284] No container was found matching "kindnet"
	I1206 09:51:55.146069  725997 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:51:55.146129  725997 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:51:55.191434  725997 cri.go:89] found id: ""
	I1206 09:51:55.191473  725997 logs.go:282] 0 containers: []
	W1206 09:51:55.191486  725997 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:51:55.191506  725997 logs.go:123] Gathering logs for kube-controller-manager [f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113] ...
	I1206 09:51:55.191522  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f169e794628e252f8b59cb10230eea713c49e3eeb84c23e736277f9ed027f113"
	I1206 09:51:55.226877  725997 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:51:55.226905  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:51:55.273245  725997 logs.go:123] Gathering logs for kubelet ...
	I1206 09:51:55.273276  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:51:55.382679  725997 logs.go:123] Gathering logs for dmesg ...
	I1206 09:51:55.382724  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:51:55.405052  725997 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:51:55.405093  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:51:55.468122  725997 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:51:55.468148  725997 logs.go:123] Gathering logs for kube-apiserver [ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c] ...
	I1206 09:51:55.468168  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba45edcfb4ed0b62410553015e466d870e46d63f0549ca6ca68603f033bd5a9c"
	I1206 09:51:55.507334  725997 logs.go:123] Gathering logs for container status ...
	I1206 09:51:55.507365  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:51:55.545933  725997 logs.go:123] Gathering logs for etcd [296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9] ...
	I1206 09:51:55.545963  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 296a6982930411ad814b33197cc3024fb5a9eba32d73af07dafc99db4cdd0ab9"
	I1206 09:51:55.587561  725997 logs.go:123] Gathering logs for kube-scheduler [93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7] ...
	I1206 09:51:55.587600  725997 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93bc055573d8367bb87887b5380b7ada6d3a1da05ad061a0f319cc892da6d9b7"
	I1206 09:51:52.570311  771291 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759696:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (5.959542248s)
	I1206 09:51:52.570343  771291 kic.go:203] duration metric: took 5.959714705s to extract preloaded images to volume ...
	W1206 09:51:52.570449  771291 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:51:52.570536  771291 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:51:52.570590  771291 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:51:52.656050  771291 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-759696 --name default-k8s-diff-port-759696 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759696 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-759696 --network default-k8s-diff-port-759696 --ip 192.168.103.2 --volume default-k8s-diff-port-759696:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:51:52.979535  771291 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Running}}
	I1206 09:51:53.000947  771291 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:51:53.020644  771291 cli_runner.go:164] Run: docker exec default-k8s-diff-port-759696 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:51:53.072883  771291 oci.go:144] the created container "default-k8s-diff-port-759696" has a running status.
	I1206 09:51:53.072929  771291 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa...
	I1206 09:51:53.171440  771291 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:51:53.204030  771291 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:51:53.231154  771291 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:51:53.231179  771291 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-759696 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:51:53.303307  771291 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:51:53.340912  771291 machine.go:94] provisionDockerMachine start ...
	I1206 09:51:53.341100  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:53.366534  771291 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:53.367221  771291 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1206 09:51:53.367266  771291 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:51:53.517300  771291 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759696
	
	I1206 09:51:53.517328  771291 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-759696"
	I1206 09:51:53.517395  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:53.539856  771291 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:53.540146  771291 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1206 09:51:53.540166  771291 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-759696 && echo "default-k8s-diff-port-759696" | sudo tee /etc/hostname
	I1206 09:51:53.684686  771291 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759696
	
	I1206 09:51:53.684761  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:53.706650  771291 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:53.706982  771291 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1206 09:51:53.707015  771291 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-759696' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-759696/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-759696' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:51:53.838991  771291 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:51:53.839022  771291 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:51:53.839060  771291 ubuntu.go:190] setting up certificates
	I1206 09:51:53.839074  771291 provision.go:84] configureAuth start
	I1206 09:51:53.839141  771291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759696
	I1206 09:51:53.856591  771291 provision.go:143] copyHostCerts
	I1206 09:51:53.856664  771291 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:51:53.856676  771291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:51:53.856756  771291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:51:53.856879  771291 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:51:53.856892  771291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:51:53.856925  771291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:51:53.857007  771291 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:51:53.857020  771291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:51:53.857059  771291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:51:53.857139  771291 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-759696 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-759696 localhost minikube]
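The server cert generated here is a CA-signed leaf whose SANs must cover every name the machine is reached by (the list in the log line above). A compact crypto/x509 sketch of the same shape — org and SANs copied from the log, everything else illustrative rather than minikube's provision code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-759696"}},
            // SANs from the log line above.
            DNSNames:    []string{"default-k8s-diff-port-759696", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    }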
	I1206 09:51:53.948415  771291 provision.go:177] copyRemoteCerts
	I1206 09:51:53.948498  771291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:51:53.948549  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:53.966488  771291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:51:54.060834  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:51:54.080650  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1206 09:51:54.098145  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:51:54.116590  771291 provision.go:87] duration metric: took 277.498539ms to configureAuth
	I1206 09:51:54.116614  771291 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:51:54.116762  771291 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:51:54.116862  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:54.134436  771291 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:54.134736  771291 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1206 09:51:54.134763  771291 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:51:54.404962  771291 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:51:54.404994  771291 machine.go:97] duration metric: took 1.064005368s to provisionDockerMachine
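Every "About to run SSH command" step in this phase is libmachine dialing the host-published SSH port (127.0.0.1:33196 here) with the generated key and running one shell command as the docker user. A minimal equivalent with golang.org/x/crypto/ssh — host, port, user, and key path are from this log; InsecureIgnoreHostKey is only acceptable because the target is a throwaway local container:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33196", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("err=%v output=%s", err, out)
    }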
	I1206 09:51:54.405006  771291 client.go:176] duration metric: took 8.345055128s to LocalClient.Create
	I1206 09:51:54.405029  771291 start.go:167] duration metric: took 8.345121494s to libmachine.API.Create "default-k8s-diff-port-759696"
	I1206 09:51:54.405038  771291 start.go:293] postStartSetup for "default-k8s-diff-port-759696" (driver="docker")
	I1206 09:51:54.405057  771291 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:51:54.405123  771291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:51:54.405175  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:54.422910  771291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:51:54.518852  771291 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:51:54.522684  771291 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:51:54.522715  771291 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:51:54.522728  771291 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:51:54.522781  771291 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:51:54.522874  771291 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:51:54.522995  771291 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:51:54.531210  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:51:54.551275  771291 start.go:296] duration metric: took 146.201105ms for postStartSetup
	I1206 09:51:54.551739  771291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759696
	I1206 09:51:54.571185  771291 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/config.json ...
	I1206 09:51:54.571443  771291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:51:54.571547  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:54.596514  771291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:51:54.688988  771291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:51:54.694235  771291 start.go:128] duration metric: took 8.636396136s to createHost
	I1206 09:51:54.694257  771291 start.go:83] releasing machines lock for "default-k8s-diff-port-759696", held for 8.636506918s
	I1206 09:51:54.694315  771291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759696
	I1206 09:51:54.714953  771291 ssh_runner.go:195] Run: cat /version.json
	I1206 09:51:54.714999  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:54.715029  771291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:51:54.715138  771291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:51:54.737899  771291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:51:54.738780  771291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:51:54.880208  771291 ssh_runner.go:195] Run: systemctl --version
	I1206 09:51:54.887624  771291 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:51:54.925603  771291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:51:54.930761  771291 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:51:54.930841  771291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:51:54.957904  771291 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
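Because kindnet will be installed as the CNI later, minikube sidelines any preinstalled bridge/podman CNI configs by renaming them with a .mk_disabled suffix — that is what the find ... -exec mv line above does. A stdlib sketch of the same rename pass:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, p := range matches {
                if strings.HasSuffix(p, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    fmt.Println("rename failed:", err)
                }
            }
        }
    }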
	I1206 09:51:54.957932  771291 start.go:496] detecting cgroup driver to use...
	I1206 09:51:54.957969  771291 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:51:54.958020  771291 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:51:54.974921  771291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:51:54.988624  771291 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:51:54.988667  771291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:51:55.005484  771291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:51:55.025882  771291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:51:55.119268  771291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:51:55.218296  771291 docker.go:234] disabling docker service ...
	I1206 09:51:55.218370  771291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:51:55.240286  771291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:51:55.254507  771291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:51:55.340974  771291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:51:55.427707  771291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:51:55.440958  771291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:51:55.455101  771291 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:51:55.455168  771291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:55.466290  771291 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:51:55.466357  771291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:55.475590  771291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:55.484500  771291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:55.493658  771291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:51:55.503428  771291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:55.512329  771291 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:55.526010  771291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
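Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following shape (an illustrative TOML fragment assembled from the substitutions in this log, with surrounding keys omitted): the pause image pinned, the cgroup manager matched to the host's systemd driver, conmon placed in the pod cgroup, and unprivileged low ports enabled so components can bind port 443 and friends:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]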
	I1206 09:51:55.535311  771291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:51:55.543523  771291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:51:55.551182  771291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:51:55.641830  771291 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:51:55.791979  771291 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:51:55.792056  771291 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:51:55.796227  771291 start.go:564] Will wait 60s for crictl version
	I1206 09:51:55.796357  771291 ssh_runner.go:195] Run: which crictl
	I1206 09:51:55.800271  771291 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:51:55.825094  771291 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:51:55.825159  771291 ssh_runner.go:195] Run: crio --version
	I1206 09:51:55.853355  771291 ssh_runner.go:195] Run: crio --version
	I1206 09:51:55.881350  771291 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:51:55.882704  771291 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:51:55.900116  771291 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:51:55.904361  771291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
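The bash pipeline above is an idempotent "replace or append" for a single /etc/hosts entry: strip any existing host.minikube.internal line, re-add it with the current gateway IP, and copy the result back with sudo. The same transformation in plain Go (path and IP from this log; run as root or adapt the write step):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // drop any stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.103.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }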
	I1206 09:51:55.914626  771291 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-759696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:51:55.914754  771291 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:51:55.914805  771291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:51:55.948189  771291 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:51:55.948211  771291 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:51:55.948256  771291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:51:55.975003  771291 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:51:55.975022  771291 cache_images.go:86] Images are preloaded, skipping loading
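The preload check shells out to sudo crictl images --output json and compares the tags found against the expected image set for v1.34.2, skipping the tarball extraction when everything is present. A sketch of decoding that output — the struct below follows the CRI ListImages response field names and should be treated as an assumption, not crictl's documented schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var resp imageList
        if err := json.Unmarshal(out, &resp); err != nil {
            panic(err)
        }
        for _, img := range resp.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag)
            }
        }
    }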
	I1206 09:51:55.975031  771291 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1206 09:51:55.975118  771291 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-759696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:51:55.975180  771291 ssh_runner.go:195] Run: crio config
	I1206 09:51:56.021880  771291 cni.go:84] Creating CNI manager for ""
	I1206 09:51:56.021906  771291 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:51:56.021928  771291 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:51:56.021951  771291 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-759696 NodeName:default-k8s-diff-port-759696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:51:56.022067  771291 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-759696"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
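The generated config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new below and later fed to kubeadm init --config. A quick sanity check that splits the file and prints each document's kind, assuming the sigs.k8s.io/yaml module:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var meta struct {
                APIVersion string `json:"apiVersion"`
                Kind       string `json:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
        }
    }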
	
	I1206 09:51:56.022130  771291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:51:56.030536  771291 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:51:56.030595  771291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:51:56.039148  771291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1206 09:51:56.051837  771291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:51:56.066606  771291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1206 09:51:56.079055  771291 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:51:56.082643  771291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:51:56.092336  771291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:51:56.173676  771291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:51:56.198899  771291 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696 for IP: 192.168.103.2
	I1206 09:51:56.198922  771291 certs.go:195] generating shared ca certs ...
	I1206 09:51:56.198938  771291 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:56.199106  771291 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:51:56.199199  771291 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:51:56.199215  771291 certs.go:257] generating profile certs ...
	I1206 09:51:56.199292  771291 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/client.key
	I1206 09:51:56.199311  771291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/client.crt with IP's: []
	I1206 09:51:56.243446  771291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/client.crt ...
	I1206 09:51:56.243498  771291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/client.crt: {Name:mk5659c0f4c7adb637d36d6e6f523b3246941815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:56.243696  771291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/client.key ...
	I1206 09:51:56.243715  771291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/client.key: {Name:mk8d8b3f349f1c2aecea3ffe0c95297b716d03ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:56.243809  771291 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key.e015ec9a
	I1206 09:51:56.243828  771291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.crt.e015ec9a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1206 09:51:56.349242  771291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.crt.e015ec9a ...
	I1206 09:51:56.349273  771291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.crt.e015ec9a: {Name:mkc01bf36fc22107dd4ce13da782a160b54996c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:56.349450  771291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key.e015ec9a ...
	I1206 09:51:56.349490  771291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key.e015ec9a: {Name:mkea3af67684e4676e391a4b63a151ad77800e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:56.349593  771291 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.crt.e015ec9a -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.crt
	I1206 09:51:56.349700  771291 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key.e015ec9a -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key
	I1206 09:51:56.349757  771291 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.key
	I1206 09:51:56.349772  771291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.crt with IP's: []
	I1206 09:51:56.390566  771291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.crt ...
	I1206 09:51:56.390596  771291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.crt: {Name:mk50e527ad586d5456f7b55b6e44eabe5796b21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:56.390758  771291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.key ...
	I1206 09:51:56.390777  771291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.key: {Name:mkb252e7e30c9a63b019e419c2fc5d89f30ac665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:56.390956  771291 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:51:56.390994  771291 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:51:56.391005  771291 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:51:56.391031  771291 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:51:56.391056  771291 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:51:56.391079  771291 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:51:56.391119  771291 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:51:56.391781  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:51:56.410617  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:51:56.428759  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:51:56.446965  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:51:56.465508  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1206 09:51:56.484262  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:51:56.502030  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:51:56.519703  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:51:56.540314  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:51:56.563670  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:51:56.582289  771291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:51:56.600872  771291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:51:56.614181  771291 ssh_runner.go:195] Run: openssl version
	I1206 09:51:56.620541  771291 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:51:56.627929  771291 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:51:56.635547  771291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:51:56.639543  771291 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:51:56.639614  771291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:51:56.675037  771291 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:51:56.683397  771291 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:51:56.691585  771291 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:51:56.699791  771291 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:51:56.707903  771291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:51:56.711799  771291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:51:56.711855  771291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:51:56.748835  771291 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:51:56.757027  771291 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:51:56.765170  771291 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:51:56.773263  771291 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:51:56.781111  771291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:51:56.785196  771291 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:51:56.785263  771291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:51:56.819543  771291 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:51:56.827825  771291 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
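The openssl x509 -hash -noout / ln -fs pairs above install each CA into the system trust directory the way update-ca-certificates would: OpenSSL locates CAs in /etc/ssl/certs through symlinks named <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem above). The same two steps from Go:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("installed", link)
    }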
	I1206 09:51:56.835411  771291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:51:56.839215  771291 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:51:56.839280  771291 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-759696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:51:56.839364  771291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:51:56.839420  771291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:51:56.870895  771291 cri.go:89] found id: ""
	I1206 09:51:56.870954  771291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:51:56.879509  771291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:51:56.887416  771291 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:51:56.887501  771291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:51:56.895281  771291 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:51:56.895307  771291 kubeadm.go:158] found existing configuration files:
	
	I1206 09:51:56.895349  771291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1206 09:51:56.903080  771291 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:51:56.903137  771291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:51:56.910693  771291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1206 09:51:56.918405  771291 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:51:56.918463  771291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:51:56.925850  771291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1206 09:51:56.934623  771291 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:51:56.934684  771291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:51:56.942748  771291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1206 09:51:56.950566  771291 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:51:56.950641  771291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:51:56.958180  771291 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:51:56.997208  771291 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:51:56.997287  771291 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:51:57.035797  771291 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:51:57.035900  771291 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:51:57.035983  771291 kubeadm.go:319] OS: Linux
	I1206 09:51:57.036053  771291 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:51:57.036209  771291 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:51:57.036335  771291 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:51:57.036479  771291 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:51:57.036547  771291 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:51:57.036606  771291 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:51:57.036668  771291 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:51:57.036725  771291 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:51:57.110649  771291 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:51:57.110805  771291 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:51:57.110952  771291 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:51:57.120254  771291 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:51:56.538609  771042 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-997968
	
	I1206 09:51:56.538638  771042 ubuntu.go:182] provisioning hostname "embed-certs-997968"
	I1206 09:51:56.538695  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:56.559034  771042 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:56.559276  771042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33201 <nil> <nil>}
	I1206 09:51:56.559290  771042 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-997968 && echo "embed-certs-997968" | sudo tee /etc/hostname
	I1206 09:51:56.699932  771042 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-997968
	
	I1206 09:51:56.700016  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:56.720795  771042 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:56.721122  771042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33201 <nil> <nil>}
	I1206 09:51:56.721153  771042 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-997968' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-997968/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-997968' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:51:56.849892  771042 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:51:56.849923  771042 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:51:56.849949  771042 ubuntu.go:190] setting up certificates
	I1206 09:51:56.849963  771042 provision.go:84] configureAuth start
	I1206 09:51:56.850034  771042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-997968
	I1206 09:51:56.869116  771042 provision.go:143] copyHostCerts
	I1206 09:51:56.869175  771042 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:51:56.869191  771042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:51:56.869268  771042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:51:56.869377  771042 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:51:56.869391  771042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:51:56.869434  771042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:51:56.869546  771042 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:51:56.869560  771042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:51:56.869609  771042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:51:56.869713  771042 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.embed-certs-997968 san=[127.0.0.1 192.168.85.2 embed-certs-997968 localhost minikube]
	I1206 09:51:56.915743  771042 provision.go:177] copyRemoteCerts
	I1206 09:51:56.915800  771042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:51:56.915840  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:56.936291  771042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33201 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:51:57.033033  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:51:57.058877  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:51:57.080883  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:51:57.102395  771042 provision.go:87] duration metric: took 252.410612ms to configureAuth
	I1206 09:51:57.102428  771042 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:51:57.102671  771042 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:51:57.102806  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:57.126835  771042 main.go:143] libmachine: Using SSH client type: native
	I1206 09:51:57.127141  771042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33201 <nil> <nil>}
	I1206 09:51:57.127167  771042 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:51:57.419194  771042 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:51:57.419224  771042 machine.go:97] duration metric: took 4.029623403s to provisionDockerMachine
	I1206 09:51:57.419238  771042 client.go:176] duration metric: took 11.603809723s to LocalClient.Create
	I1206 09:51:57.419263  771042 start.go:167] duration metric: took 11.603892942s to libmachine.API.Create "embed-certs-997968"
	I1206 09:51:57.419277  771042 start.go:293] postStartSetup for "embed-certs-997968" (driver="docker")
	I1206 09:51:57.419291  771042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:51:57.419381  771042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:51:57.419451  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:57.438484  771042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33201 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:51:57.533189  771042 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:51:57.536644  771042 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:51:57.536668  771042 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:51:57.536679  771042 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:51:57.536732  771042 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:51:57.536819  771042 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:51:57.536934  771042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:51:57.544447  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:51:57.564186  771042 start.go:296] duration metric: took 144.882001ms for postStartSetup
	I1206 09:51:57.564565  771042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-997968
	I1206 09:51:57.583234  771042 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/config.json ...
	I1206 09:51:57.583541  771042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:51:57.583608  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:57.601771  771042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33201 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:51:57.692327  771042 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:51:57.697299  771042 start.go:128] duration metric: took 11.883924355s to createHost
	I1206 09:51:57.697330  771042 start.go:83] releasing machines lock for "embed-certs-997968", held for 11.88406873s
	I1206 09:51:57.697408  771042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-997968
	I1206 09:51:57.716426  771042 ssh_runner.go:195] Run: cat /version.json
	I1206 09:51:57.716523  771042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:51:57.716614  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:57.716525  771042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:51:57.737170  771042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33201 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:51:57.737863  771042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33201 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:51:57.832140  771042 ssh_runner.go:195] Run: systemctl --version
	I1206 09:51:57.884374  771042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:51:57.921798  771042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:51:57.926911  771042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:51:57.926984  771042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:51:57.953347  771042 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:51:57.953378  771042 start.go:496] detecting cgroup driver to use...
	I1206 09:51:57.953416  771042 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:51:57.953472  771042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:51:57.970801  771042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:51:57.983014  771042 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:51:57.983073  771042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:51:57.999357  771042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:51:58.016626  771042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:51:58.098710  771042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:51:58.191354  771042 docker.go:234] disabling docker service ...
	I1206 09:51:58.191432  771042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:51:58.214591  771042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:51:58.228033  771042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:51:58.310528  771042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:51:58.390502  771042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:51:58.403210  771042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:51:58.417155  771042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:51:58.417229  771042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:58.427367  771042 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:51:58.427426  771042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:58.436275  771042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:58.444899  771042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:58.453306  771042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:51:58.461200  771042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:58.469518  771042 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:58.482573  771042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:51:58.491068  771042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:51:58.498304  771042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:51:58.505626  771042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:51:58.595930  771042 ssh_runner.go:195] Run: sudo systemctl restart crio
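	The sed edits above, applied just before this restart, leave the CRI-O drop-in with the following keys; a quick way to confirm them on the node (a sketch reconstructed from the logged commands, not a capture of the file):
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  # default_sysctls = [
	  #   "net.ipv4.ip_unprivileged_port_start=0",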
	I1206 09:51:58.738015  771042 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:51:58.738089  771042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:51:58.742219  771042 start.go:564] Will wait 60s for crictl version
	I1206 09:51:58.742280  771042 ssh_runner.go:195] Run: which crictl
	I1206 09:51:58.746334  771042 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:51:58.775129  771042 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:51:58.775209  771042 ssh_runner.go:195] Run: crio --version
	I1206 09:51:58.807135  771042 ssh_runner.go:195] Run: crio --version
	I1206 09:51:58.837902  771042 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:51:58.172535  725997 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:51:58.173024  725997 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 09:51:58.173107  725997 kubeadm.go:602] duration metric: took 4m5.148925821s to restartPrimaryControlPlane
	W1206 09:51:58.173184  725997 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 09:51:58.173252  725997 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
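	kubeadm reset removes the generated kubeconfig files under /etc/kubernetes along with the static-pod manifests and etcd data, which is why every admin.conf/kubelet.conf probe below exits with status 2 and minikube falls through to a clean init. A manual spot-check would be (sketch; the surviving entries are assumed from the directories minikube itself creates):
	
	  sudo ls /etc/kubernetes
	  # addons  manifests        (the *.conf kubeconfig files are gone)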
	I1206 09:51:58.964578  725997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:51:58.976985  725997 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:51:58.986362  725997 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:51:58.986431  725997 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:51:58.995564  725997 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:51:58.995580  725997 kubeadm.go:158] found existing configuration files:
	
	I1206 09:51:58.995618  725997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:51:59.006194  725997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:51:59.006258  725997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:51:59.015679  725997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:51:59.024973  725997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:51:59.025021  725997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:51:59.033710  725997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:51:59.042927  725997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:51:59.042979  725997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:51:59.051871  725997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:51:59.061063  725997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:51:59.061107  725997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:51:59.070568  725997 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:51:59.120397  725997 kubeadm.go:319] [init] Using Kubernetes version: v1.32.0
	I1206 09:51:59.120484  725997 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:51:59.140650  725997 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:51:59.140751  725997 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:51:59.140801  725997 kubeadm.go:319] OS: Linux
	I1206 09:51:59.140862  725997 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:51:59.140932  725997 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:51:59.141018  725997 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:51:59.141097  725997 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:51:59.141162  725997 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:51:59.141233  725997 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:51:59.141277  725997 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:51:59.141342  725997 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:51:59.201104  725997 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:51:59.201241  725997 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:51:59.201360  725997 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:51:59.208300  725997 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:51:59.211584  725997 out.go:252]   - Generating certificates and keys ...
	I1206 09:51:59.211688  725997 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:51:59.211826  725997 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:51:59.211971  725997 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 09:51:59.212056  725997 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 09:51:59.212153  725997 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 09:51:59.212219  725997 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 09:51:59.212307  725997 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 09:51:59.212397  725997 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 09:51:59.212532  725997 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 09:51:59.212658  725997 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 09:51:59.212723  725997 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 09:51:59.212804  725997 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:51:59.527991  725997 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:51:59.585869  725997 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:51:59.732107  725997 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:52:00.185658  725997 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:52:00.339778  725997 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:52:00.340279  725997 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:52:00.342662  725997 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:51:58.838892  771042 cli_runner.go:164] Run: docker network inspect embed-certs-997968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
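	The Go template above flattens docker network inspect into one line of pseudo-JSON; the subnet alone can be pulled with a much smaller template (sketch; 192.168.85.0/24 is inferred from the node IP and gateway in this log rather than from captured output):
	
	  docker network inspect embed-certs-997968 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	  # 192.168.85.0/24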
	I1206 09:51:58.856877  771042 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:51:58.861199  771042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
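	The hosts rewrite above uses the write-to-temp-then-sudo-cp pattern: the redirection runs as the unprivileged jenkins user, so the updated file is staged in /tmp and copied into place with root privileges. The same idempotent update, wrapped in a hypothetical helper whose body mirrors the logged command:
	
	  update_hosts_entry() {
	    local ip="$1" name="$2"
	    # drop any stale line for $name, append the fresh one, then install with root privileges
	    { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts
	  }
	  update_hosts_entry 192.168.85.1 host.minikube.internal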
	I1206 09:51:58.873254  771042 kubeadm.go:884] updating cluster {Name:embed-certs-997968 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-997968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:51:58.873368  771042 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:51:58.873407  771042 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:51:58.909416  771042 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:51:58.909437  771042 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:51:58.909500  771042 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:51:58.940691  771042 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:51:58.940715  771042 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:51:58.940724  771042 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1206 09:51:58.940814  771042 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-997968 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-997968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
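	This unit fragment is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 368-byte scp a few lines below); after the daemon-reload that follows, the merged unit can be inspected with:
	
	  systemctl cat kubelet
	  # prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf
	  # override, including the ExecStart above with --hostname-override=embed-certs-997968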
	I1206 09:51:58.940876  771042 ssh_runner.go:195] Run: crio config
	I1206 09:51:59.002961  771042 cni.go:84] Creating CNI manager for ""
	I1206 09:51:59.002984  771042 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:51:59.003007  771042 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:51:59.003040  771042 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-997968 NodeName:embed-certs-997968 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:51:59.003208  771042 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-997968"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
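	The rendered config above is what lands in /var/tmp/minikube/kubeadm.yaml.new (the 2214-byte scp below). Assuming this kubeadm build ships the "config validate" subcommand (present in recent kubeadm releases), the file can be sanity-checked against the v1beta4 types before init:
	
	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new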
	I1206 09:51:59.003287  771042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:51:59.012094  771042 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:51:59.012156  771042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:51:59.020787  771042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1206 09:51:59.033871  771042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:51:59.049617  771042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1206 09:51:59.062724  771042 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:51:59.066466  771042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:51:59.076676  771042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:51:59.177094  771042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:51:59.208568  771042 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968 for IP: 192.168.85.2
	I1206 09:51:59.208590  771042 certs.go:195] generating shared ca certs ...
	I1206 09:51:59.208610  771042 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:59.208769  771042 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:51:59.208831  771042 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:51:59.208845  771042 certs.go:257] generating profile certs ...
	I1206 09:51:59.208936  771042 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/client.key
	I1206 09:51:59.208956  771042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/client.crt with IP's: []
	I1206 09:51:59.295009  771042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/client.crt ...
	I1206 09:51:59.295037  771042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/client.crt: {Name:mkeaf997cf70c6e26f09e9cee1ad6f91da9ea5e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:59.295242  771042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/client.key ...
	I1206 09:51:59.295260  771042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/client.key: {Name:mkc8f7fd8af844c52ef84a7f00ecc166cb24328d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:59.295395  771042 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.key.47d48c55
	I1206 09:51:59.295419  771042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.crt.47d48c55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:51:59.515853  771042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.crt.47d48c55 ...
	I1206 09:51:59.515882  771042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.crt.47d48c55: {Name:mk70392915e8b1dbb32bd2be9c5bb1aecd97f457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:59.516070  771042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.key.47d48c55 ...
	I1206 09:51:59.516087  771042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.key.47d48c55: {Name:mk91332304a3aab13c67a1a5801f202e1a513ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:59.516190  771042 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.crt.47d48c55 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.crt
	I1206 09:51:59.516286  771042 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.key.47d48c55 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.key
	I1206 09:51:59.516351  771042 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/proxy-client.key
	I1206 09:51:59.516368  771042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/proxy-client.crt with IP's: []
	I1206 09:51:59.557377  771042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/proxy-client.crt ...
	I1206 09:51:59.557400  771042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/proxy-client.crt: {Name:mkcc77029f03a30c983de87e176d91b7a5883c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:59.557580  771042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/proxy-client.key ...
	I1206 09:51:59.557600  771042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/proxy-client.key: {Name:mk754196b31fec85bb300f81f2198b5de5bda879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:51:59.557812  771042 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:51:59.557852  771042 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:51:59.557862  771042 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:51:59.557889  771042 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:51:59.557913  771042 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:51:59.557936  771042 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:51:59.557977  771042 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:51:59.558565  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:51:59.577181  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:51:59.594343  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:51:59.611240  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:51:59.628621  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1206 09:51:59.645758  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:51:59.662750  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:51:59.680015  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:51:59.697633  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:51:59.716170  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:51:59.735517  771042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:51:59.752714  771042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:51:59.765355  771042 ssh_runner.go:195] Run: openssl version
	I1206 09:51:59.771583  771042 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:51:59.778845  771042 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:51:59.786433  771042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:51:59.790405  771042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:51:59.790467  771042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:51:59.824581  771042 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:51:59.832874  771042 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
	I1206 09:51:59.840869  771042 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:51:59.848407  771042 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:51:59.856090  771042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:51:59.859976  771042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:51:59.860037  771042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:51:59.895209  771042 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:51:59.903438  771042 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:51:59.910935  771042 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:51:59.918336  771042 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:51:59.925786  771042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:51:59.929420  771042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:51:59.929484  771042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:51:59.963489  771042 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:51:59.970961  771042 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
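	Each "openssl x509 -hash -noout" above prints the OpenSSL subject-name hash, and the ln -fs that follows creates the hash-named .0 symlink the certificate lookup path expects: 51391683 for 502867.pem, 3ec20f2e for 5028672.pem, b5213941 for minikubeCA.pem. The same step by hand:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here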
	I1206 09:51:59.978366  771042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:51:59.981985  771042 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:51:59.982055  771042 kubeadm.go:401] StartCluster: {Name:embed-certs-997968 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-997968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:51:59.982145  771042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:51:59.982191  771042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:52:00.009417  771042 cri.go:89] found id: ""
	I1206 09:52:00.009512  771042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:52:00.017828  771042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:52:00.025609  771042 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:52:00.025667  771042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:52:00.033838  771042 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:52:00.033860  771042 kubeadm.go:158] found existing configuration files:
	
	I1206 09:52:00.033900  771042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:52:00.043902  771042 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:52:00.043964  771042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:52:00.052747  771042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:52:00.061759  771042 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:52:00.061800  771042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:52:00.070505  771042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:52:00.078890  771042 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:52:00.078931  771042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:52:00.087409  771042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:52:00.095776  771042 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:52:00.095830  771042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:52:00.103318  771042 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:52:00.166935  771042 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:52:00.233170  771042 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
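	Both preflight warnings are expected here: SystemVerification is already on the --ignore-preflight-errors list in the init invocation above (the kicbase kernel ships no "configs" module), and the kubelet warning clears once the unit is enabled, exactly as the message suggests:
	
	  sudo systemctl enable kubelet.service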
	I1206 09:51:57.123054  771291 out.go:252]   - Generating certificates and keys ...
	I1206 09:51:57.123202  771291 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:51:57.123329  771291 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:51:57.603966  771291 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:51:57.665969  771291 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:51:58.021240  771291 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:51:58.273437  771291 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:51:58.303673  771291 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:51:58.303881  771291 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-759696 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1206 09:51:58.542258  771291 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:51:58.542488  771291 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-759696 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1206 09:51:58.599932  771291 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:51:58.854912  771291 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:51:58.993430  771291 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:51:58.993553  771291 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:51:59.514787  771291 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:51:59.926623  771291 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:52:00.335490  771291 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:52:00.515146  771291 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:52:00.614021  771291 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:52:00.614741  771291 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:52:00.619181  771291 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:52:00.345175  725997 out.go:252]   - Booting up control plane ...
	I1206 09:52:00.345313  725997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:52:00.345582  725997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:52:00.346551  725997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:52:00.356605  725997 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:52:00.362913  725997 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:52:00.362978  725997 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:52:00.452998  725997 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:52:00.453171  725997 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:52:00.622870  771291 out.go:252]   - Booting up control plane ...
	I1206 09:52:00.623002  771291 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:52:00.623119  771291 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:52:00.623204  771291 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:52:00.638888  771291 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:52:00.639281  771291 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:52:00.647221  771291 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:52:00.647382  771291 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:52:00.647475  771291 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:52:00.760624  771291 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:52:00.760815  771291 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:52:01.454680  725997 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001775903s
	I1206 09:52:01.454825  725997 kubeadm.go:319] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1206 09:52:05.456920  725997 kubeadm.go:319] [api-check] The API server is healthy after 4.002263328s
	I1206 09:52:05.469527  725997 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:52:05.479487  725997 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:52:05.497813  725997 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:52:05.498003  725997 kubeadm.go:319] [mark-control-plane] Marking the node stopped-upgrade-031481 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:52:05.504971  725997 kubeadm.go:319] [bootstrap-token] Using token: 465nps.0ehumo9ysa1xh726
	I1206 09:52:05.506528  725997 out.go:252]   - Configuring RBAC rules ...
	I1206 09:52:05.506691  725997 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:52:05.510167  725997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:52:05.514779  725997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:52:05.517128  725997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:52:05.519348  725997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:52:05.521553  725997 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	
	
	==> CRI-O <==
	Dec 06 09:51:52 no-preload-521770 crio[771]: time="2025-12-06T09:51:52.589734208Z" level=info msg="Started container" PID=2825 containerID=a5c3482879fde54c6fc7d2d2797b8852b874959cc42ea00f8d04b4590d42c96e description=kube-system/storage-provisioner/storage-provisioner id=3c2d4774-4bc2-4a16-a664-c8d90d748211 name=/runtime.v1.RuntimeService/StartContainer sandboxID=547146bf0e7576d5937bc55e5cd578e6675ae04006d2136a9a2623049ec7f760
	Dec 06 09:51:52 no-preload-521770 crio[771]: time="2025-12-06T09:51:52.59236444Z" level=info msg="Started container" PID=2826 containerID=3834bb9863a5c6d928e14d533a9df55321538bda9269639f16956e0e54eba8bf description=kube-system/coredns-7d764666f9-mhwh5/coredns id=f997e187-a74d-4118-ac18-b6bbfd26357a name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e0ac597cff06f29b431e8a508af21ec2c37d6377610eee6627d286a455b3dd4
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.024630866Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1a800902-9bae-4c54-9065-da4c328ad526 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.02470295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.030240356Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:624c094e5eb0332b595ad01ea1810555458658e8c818ec7cf015ff335d409b07 UID:a011c5ce-2ff8-4279-bed5-cf9ec25a1eb0 NetNS:/var/run/netns/316c4b89-52ef-46ac-b03e-7339390dd52d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d165b8}] Aliases:map[]}"
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.03026912Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.040436017Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:624c094e5eb0332b595ad01ea1810555458658e8c818ec7cf015ff335d409b07 UID:a011c5ce-2ff8-4279-bed5-cf9ec25a1eb0 NetNS:/var/run/netns/316c4b89-52ef-46ac-b03e-7339390dd52d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d165b8}] Aliases:map[]}"
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.040635389Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.041677403Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.042945893Z" level=info msg="Ran pod sandbox 624c094e5eb0332b595ad01ea1810555458658e8c818ec7cf015ff335d409b07 with infra container: default/busybox/POD" id=1a800902-9bae-4c54-9065-da4c328ad526 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.044165193Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b83f7cb1-66fd-4a5b-ada1-c55ee40c0ba6 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.044287172Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b83f7cb1-66fd-4a5b-ada1-c55ee40c0ba6 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.044317069Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b83f7cb1-66fd-4a5b-ada1-c55ee40c0ba6 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.045042584Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ba7b51d-5d07-4c8e-9440-07b182112a22 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:51:55 no-preload-521770 crio[771]: time="2025-12-06T09:51:55.04761238Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.167479837Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0ba7b51d-5d07-4c8e-9440-07b182112a22 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.168097562Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fb24a375-c7f9-41ac-b7f4-849a10b640af name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.169976679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fa5c6cfd-4c80-46a0-bc9e-7f4de6d31c76 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.173192488Z" level=info msg="Creating container: default/busybox/busybox" id=a16a30c4-016f-4c75-810e-3cecde650731 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.173320797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.177700255Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.178369667Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.212437652Z" level=info msg="Created container f8ed03d924a2d467c44829d3bb7e3e07143b8e93bbb1664a23a119a6a4dc39fa: default/busybox/busybox" id=a16a30c4-016f-4c75-810e-3cecde650731 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.213075221Z" level=info msg="Starting container: f8ed03d924a2d467c44829d3bb7e3e07143b8e93bbb1664a23a119a6a4dc39fa" id=5840ecc0-9deb-44a1-88ce-53a7f675d6b2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:51:57 no-preload-521770 crio[771]: time="2025-12-06T09:51:57.214748157Z" level=info msg="Started container" PID=2896 containerID=f8ed03d924a2d467c44829d3bb7e3e07143b8e93bbb1664a23a119a6a4dc39fa description=default/busybox/busybox id=5840ecc0-9deb-44a1-88ce-53a7f675d6b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=624c094e5eb0332b595ad01ea1810555458658e8c818ec7cf015ff335d409b07
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f8ed03d924a2d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   624c094e5eb03       busybox                                     default
	3834bb9863a5c       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   1e0ac597cff06       coredns-7d764666f9-mhwh5                    kube-system
	a5c3482879fde       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   547146bf0e757       storage-provisioner                         kube-system
	82884774b5d9d       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   d91451584d007       kindnet-2w8b5                               kube-system
	902d582064408       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      28 seconds ago      Running             kube-proxy                0                   134f4fe7c6af3       kube-proxy-t7vrx                            kube-system
	1cc3102b1aaf3       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      39 seconds ago      Running             kube-scheduler            0                   55a8d33ba0fdf       kube-scheduler-no-preload-521770            kube-system
	ec112f259b785       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      39 seconds ago      Running             kube-controller-manager   0                   feea79cd26070       kube-controller-manager-no-preload-521770   kube-system
	a9191c4fe80ad       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      39 seconds ago      Running             kube-apiserver            0                   89343b73aef3c       kube-apiserver-no-preload-521770            kube-system
	be80a0cc6bb0a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      39 seconds ago      Running             etcd                      0                   de64df41c7136       etcd-no-preload-521770                      kube-system
	
	
	==> coredns [3834bb9863a5c6d928e14d533a9df55321538bda9269639f16956e0e54eba8bf] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46284 - 54978 "HINFO IN 2539695480448441571.6507391666944338116. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036238593s
	
	
	==> describe nodes <==
	Name:               no-preload-521770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-521770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=no-preload-521770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_51_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:51:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-521770
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:52:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:52:02 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:52:02 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:52:02 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:52:02 +0000   Sat, 06 Dec 2025 09:51:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-521770
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                77a79082-49d3-48ca-89e8-de80a1e12164
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7d764666f9-mhwh5                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-no-preload-521770                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-2w8b5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-521770             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-521770    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-t7vrx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-521770             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node no-preload-521770 event: Registered Node no-preload-521770 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [be80a0cc6bb0a15e4636b40bfa30d4b496b14529433f25657c30dea7812e3581] <==
	{"level":"warn","ts":"2025-12-06T09:51:29.779215Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"260.636799ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766650046767409 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-cluster-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-cluster-critical\" value_size:407 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-06T09:51:29.779393Z","caller":"traceutil/trace.go:172","msg":"trace[120947589] transaction","detail":"{read_only:false; response_revision:104; number_of_response:1; }","duration":"337.545177ms","start":"2025-12-06T09:51:29.441832Z","end":"2025-12-06T09:51:29.779377Z","steps":["trace[120947589] 'process raft request'  (duration: 76.244437ms)","trace[120947589] 'compare'  (duration: 260.506698ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:51:29.779514Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:51:29.441819Z","time spent":"337.631034ms","remote":"127.0.0.1:58320","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":464,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/priorityclasses/system-cluster-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-cluster-critical\" value_size:407 >> failure:<>"}
	{"level":"warn","ts":"2025-12-06T09:51:49.595414Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.104718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-521770\" limit:1 ","response":"range_response_count:1 size:4559"}
	{"level":"info","ts":"2025-12-06T09:51:49.595522Z","caller":"traceutil/trace.go:172","msg":"trace[117107529] range","detail":"{range_begin:/registry/minions/no-preload-521770; range_end:; response_count:1; response_revision:435; }","duration":"104.223734ms","start":"2025-12-06T09:51:49.491282Z","end":"2025-12-06T09:51:49.595506Z","steps":["trace[117107529] 'range keys from in-memory index tree'  (duration: 103.966959ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:51:50.791750Z","caller":"traceutil/trace.go:172","msg":"trace[1987146217] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"147.116428ms","start":"2025-12-06T09:51:50.644612Z","end":"2025-12-06T09:51:50.791729Z","steps":["trace[1987146217] 'process raft request'  (duration: 89.767913ms)","trace[1987146217] 'compare'  (duration: 57.241662ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:51:51.152806Z","caller":"traceutil/trace.go:172","msg":"trace[30231842] linearizableReadLoop","detail":"{readStateIndex:453; appliedIndex:453; }","duration":"161.14044ms","start":"2025-12-06T09:51:50.991637Z","end":"2025-12-06T09:51:51.152777Z","steps":["trace[30231842] 'read index received'  (duration: 161.134306ms)","trace[30231842] 'applied index is now lower than readState.Index'  (duration: 5.425µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:51:51.153044Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.394493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-521770\" limit:1 ","response":"range_response_count:1 size:4559"}
	{"level":"info","ts":"2025-12-06T09:51:51.153097Z","caller":"traceutil/trace.go:172","msg":"trace[1367143691] range","detail":"{range_begin:/registry/minions/no-preload-521770; range_end:; response_count:1; response_revision:436; }","duration":"161.46272ms","start":"2025-12-06T09:51:50.991626Z","end":"2025-12-06T09:51:51.153089Z","steps":["trace[1367143691] 'agreement among raft nodes before linearized reading'  (duration: 161.275835ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:51:51.153137Z","caller":"traceutil/trace.go:172","msg":"trace[1205513415] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"190.772053ms","start":"2025-12-06T09:51:50.962346Z","end":"2025-12-06T09:51:51.153119Z","steps":["trace[1205513415] 'process raft request'  (duration: 190.506856ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:51:51.153067Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.171471ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:51:51.153213Z","caller":"traceutil/trace.go:172","msg":"trace[384229998] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:437; }","duration":"119.325311ms","start":"2025-12-06T09:51:51.033881Z","end":"2025-12-06T09:51:51.153207Z","steps":["trace[384229998] 'agreement among raft nodes before linearized reading'  (duration: 119.157995ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:51:51.304029Z","caller":"traceutil/trace.go:172","msg":"trace[1109621768] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"143.147861ms","start":"2025-12-06T09:51:51.160866Z","end":"2025-12-06T09:51:51.304014Z","steps":["trace[1109621768] 'process raft request'  (duration: 143.055901ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:51:51.542257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.321298ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766650046768248 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-521770\" mod_revision:437 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-521770\" value_size:7226 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-521770\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:51:51.542499Z","caller":"traceutil/trace.go:172","msg":"trace[545190109] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"230.355031ms","start":"2025-12-06T09:51:51.312121Z","end":"2025-12-06T09:51:51.542476Z","steps":["trace[545190109] 'process raft request'  (duration: 118.615766ms)","trace[545190109] 'compare'  (duration: 111.194925ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:51:51.752148Z","caller":"traceutil/trace.go:172","msg":"trace[612365435] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"202.08104ms","start":"2025-12-06T09:51:51.550047Z","end":"2025-12-06T09:51:51.752128Z","steps":["trace[612365435] 'process raft request'  (duration: 200.665554ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:51:51.811616Z","caller":"traceutil/trace.go:172","msg":"trace[2069658547] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"133.206378ms","start":"2025-12-06T09:51:51.678394Z","end":"2025-12-06T09:51:51.811601Z","steps":["trace[2069658547] 'process raft request'  (duration: 132.970329ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:51:51.811607Z","caller":"traceutil/trace.go:172","msg":"trace[1103933705] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"130.085746ms","start":"2025-12-06T09:51:51.681502Z","end":"2025-12-06T09:51:51.811588Z","steps":["trace[1103933705] 'process raft request'  (duration: 130.00315ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:51:52.094848Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.990058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-521770\" limit:1 ","response":"range_response_count:1 size:4389"}
	{"level":"info","ts":"2025-12-06T09:51:52.094908Z","caller":"traceutil/trace.go:172","msg":"trace[1627469509] range","detail":"{range_begin:/registry/minions/no-preload-521770; range_end:; response_count:1; response_revision:448; }","duration":"104.059803ms","start":"2025-12-06T09:51:51.990833Z","end":"2025-12-06T09:51:52.094893Z","steps":["trace[1627469509] 'agreement among raft nodes before linearized reading'  (duration: 34.285608ms)","trace[1627469509] 'range keys from in-memory index tree'  (duration: 69.617359ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:51:52.094965Z","caller":"traceutil/trace.go:172","msg":"trace[567712453] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"144.142981ms","start":"2025-12-06T09:51:51.950792Z","end":"2025-12-06T09:51:52.094935Z","steps":["trace[567712453] 'process raft request'  (duration: 74.368721ms)","trace[567712453] 'compare'  (duration: 69.596527ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:51:52.544584Z","caller":"traceutil/trace.go:172","msg":"trace[918871192] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:467; }","duration":"190.799366ms","start":"2025-12-06T09:51:52.353746Z","end":"2025-12-06T09:51:52.544545Z","steps":["trace[918871192] 'read index received'  (duration: 190.790304ms)","trace[918871192] 'applied index is now lower than readState.Index'  (duration: 7.565µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:51:52.544663Z","caller":"traceutil/trace.go:172","msg":"trace[63025793] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"247.166043ms","start":"2025-12-06T09:51:52.297483Z","end":"2025-12-06T09:51:52.544649Z","steps":["trace[63025793] 'process raft request'  (duration: 247.048493ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:51:52.544691Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.933252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:51:52.544722Z","caller":"traceutil/trace.go:172","msg":"trace[749796159] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:451; }","duration":"190.980791ms","start":"2025-12-06T09:51:52.353732Z","end":"2025-12-06T09:51:52.544712Z","steps":["trace[749796159] 'agreement among raft nodes before linearized reading'  (duration: 190.892425ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:52:06 up  2:34,  0 user,  load average: 5.25, 2.91, 3.25
	Linux no-preload-521770 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [82884774b5d9d5c5d073de4e4d18da1d1cf6662c6762b9f9828af6289b350f60] <==
	I1206 09:51:40.834737       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:51:40.835160       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:51:40.835305       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:51:40.835331       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:51:40.835354       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:51:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:51:41.231674       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:51:41.231817       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:51:41.231848       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:51:41.232026       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:51:41.533561       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:51:41.533600       1 metrics.go:72] Registering metrics
	I1206 09:51:41.533678       1 controller.go:711] "Syncing nftables rules"
	I1206 09:51:51.132561       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:51:51.132623       1 main.go:301] handling current node
	I1206 09:52:01.134012       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:52:01.134083       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a9191c4fe80adc19889ed3bf4757d05b6341e0884b37d0c9583b0d20ab3bdb46] <==
	I1206 09:51:28.444426       1 policy_source.go:248] refreshing policies
	E1206 09:51:28.480339       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1206 09:51:28.527359       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:51:28.531671       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1206 09:51:28.531953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:51:28.544143       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:51:28.615968       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:51:29.439140       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1206 09:51:29.780493       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:51:29.780521       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:51:30.479211       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:51:30.513366       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:51:30.636585       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:51:30.642358       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1206 09:51:30.643423       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:51:30.649380       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:51:31.368755       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:51:31.771267       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:51:31.782690       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:51:31.791978       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:51:36.871941       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:51:36.876346       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:51:36.920915       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:51:37.369379       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1206 09:52:04.868751       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:43958: use of closed network connection
	
	
	==> kube-controller-manager [ec112f259b7851ac282c5d8280bbb4715eb2a9212c8d60f538a0d5a84a5728ff] <==
	I1206 09:51:36.176202       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.176372       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:51:36.178132       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177018       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177028       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177044       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177044       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177056       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177063       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177066       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177080       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177086       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.177020       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.179468       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:51:36.179948       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-521770"
	I1206 09:51:36.180028       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1206 09:51:36.180872       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.186643       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-521770" podCIDRs=["10.244.0.0/24"]
	I1206 09:51:36.188083       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:51:36.189880       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.277949       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:36.277975       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:51:36.277980       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:51:36.288441       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:56.183509       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [902d582064408006f00acd920a5f8609433471d1a2993f3f2da722bc3af43673] <==
	I1206 09:51:37.791280       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:51:37.850292       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:51:37.950392       1 shared_informer.go:377] "Caches are synced"
	I1206 09:51:37.950429       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:51:37.950519       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:51:38.007270       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:51:38.007346       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:51:38.018439       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:51:38.019994       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:51:38.020069       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:51:38.029058       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:51:38.029082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:51:38.029113       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:51:38.029118       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:51:38.029255       1 config.go:200] "Starting service config controller"
	I1206 09:51:38.029267       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:51:38.029590       1 config.go:309] "Starting node config controller"
	I1206 09:51:38.029851       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:51:38.030134       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:51:38.129723       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:51:38.129773       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:51:38.131160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1cc3102b1aaf3791de9196e00b897cf51fe09588db76ba9c5b54b9d73e4dd48e] <==
	E1206 09:51:29.408394       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:51:29.409256       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1206 09:51:29.409285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1206 09:51:29.409993       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1206 09:51:29.502012       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:51:29.502951       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1206 09:51:29.519158       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:51:29.520097       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1206 09:51:29.684570       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1206 09:51:29.685597       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1206 09:51:29.697855       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:51:29.698890       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1206 09:51:29.717107       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1206 09:51:29.717991       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1206 09:51:29.805623       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:51:29.806530       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1206 09:51:29.859622       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1206 09:51:29.860513       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1206 09:51:29.917194       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:51:29.918184       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1206 09:51:29.920156       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:51:29.921087       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1206 09:51:29.982581       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1206 09:51:29.983702       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1206 09:51:32.486080       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:51:37 no-preload-521770 kubelet[2212]: I1206 09:51:37.488877    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds26c\" (UniqueName: \"kubernetes.io/projected/6fd87fa0-c550-4070-86fc-32b4938f35da-kube-api-access-ds26c\") pod \"kindnet-2w8b5\" (UID: \"6fd87fa0-c550-4070-86fc-32b4938f35da\") " pod="kube-system/kindnet-2w8b5"
	Dec 06 09:51:37 no-preload-521770 kubelet[2212]: I1206 09:51:37.488901    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd87fa0-c550-4070-86fc-32b4938f35da-xtables-lock\") pod \"kindnet-2w8b5\" (UID: \"6fd87fa0-c550-4070-86fc-32b4938f35da\") " pod="kube-system/kindnet-2w8b5"
	Dec 06 09:51:37 no-preload-521770 kubelet[2212]: I1206 09:51:37.488924    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4a78bfd-8025-45f5-94fa-116ef311de94-lib-modules\") pod \"kube-proxy-t7vrx\" (UID: \"e4a78bfd-8025-45f5-94fa-116ef311de94\") " pod="kube-system/kube-proxy-t7vrx"
	Dec 06 09:51:37 no-preload-521770 kubelet[2212]: I1206 09:51:37.488945    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4a78bfd-8025-45f5-94fa-116ef311de94-xtables-lock\") pod \"kube-proxy-t7vrx\" (UID: \"e4a78bfd-8025-45f5-94fa-116ef311de94\") " pod="kube-system/kube-proxy-t7vrx"
	Dec 06 09:51:37 no-preload-521770 kubelet[2212]: I1206 09:51:37.488974    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rlm\" (UniqueName: \"kubernetes.io/projected/e4a78bfd-8025-45f5-94fa-116ef311de94-kube-api-access-h6rlm\") pod \"kube-proxy-t7vrx\" (UID: \"e4a78bfd-8025-45f5-94fa-116ef311de94\") " pod="kube-system/kube-proxy-t7vrx"
	Dec 06 09:51:38 no-preload-521770 kubelet[2212]: I1206 09:51:38.713230    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-t7vrx" podStartSLOduration=1.713210226 podStartE2EDuration="1.713210226s" podCreationTimestamp="2025-12-06 09:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:51:38.712747497 +0000 UTC m=+7.151881087" watchObservedRunningTime="2025-12-06 09:51:38.713210226 +0000 UTC m=+7.152343798"
	Dec 06 09:51:40 no-preload-521770 kubelet[2212]: I1206 09:51:40.723229    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-2w8b5" podStartSLOduration=0.869491292 podStartE2EDuration="3.723207603s" podCreationTimestamp="2025-12-06 09:51:37 +0000 UTC" firstStartedPulling="2025-12-06 09:51:37.701245004 +0000 UTC m=+6.140378586" lastFinishedPulling="2025-12-06 09:51:40.554961316 +0000 UTC m=+8.994094897" observedRunningTime="2025-12-06 09:51:40.72279528 +0000 UTC m=+9.161928862" watchObservedRunningTime="2025-12-06 09:51:40.723207603 +0000 UTC m=+9.162341193"
	Dec 06 09:51:40 no-preload-521770 kubelet[2212]: E1206 09:51:40.899154    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-521770" containerName="kube-controller-manager"
	Dec 06 09:51:41 no-preload-521770 kubelet[2212]: E1206 09:51:41.033293    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-521770" containerName="etcd"
	Dec 06 09:51:41 no-preload-521770 kubelet[2212]: E1206 09:51:41.118844    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-521770" containerName="kube-scheduler"
	Dec 06 09:51:45 no-preload-521770 kubelet[2212]: E1206 09:51:45.293653    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-521770" containerName="kube-apiserver"
	Dec 06 09:51:50 no-preload-521770 kubelet[2212]: E1206 09:51:50.903424    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-521770" containerName="kube-controller-manager"
	Dec 06 09:51:51 no-preload-521770 kubelet[2212]: E1206 09:51:51.123009    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-521770" containerName="kube-scheduler"
	Dec 06 09:51:51 no-preload-521770 kubelet[2212]: E1206 09:51:51.153915    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-521770" containerName="etcd"
	Dec 06 09:51:51 no-preload-521770 kubelet[2212]: I1206 09:51:51.676275    2212 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 06 09:51:51 no-preload-521770 kubelet[2212]: I1206 09:51:51.995143    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8d7204c-9d11-4944-bc37-a5788a67aaab-config-volume\") pod \"coredns-7d764666f9-mhwh5\" (UID: \"a8d7204c-9d11-4944-bc37-a5788a67aaab\") " pod="kube-system/coredns-7d764666f9-mhwh5"
	Dec 06 09:51:51 no-preload-521770 kubelet[2212]: I1206 09:51:51.995189    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txq5h\" (UniqueName: \"kubernetes.io/projected/a8d7204c-9d11-4944-bc37-a5788a67aaab-kube-api-access-txq5h\") pod \"coredns-7d764666f9-mhwh5\" (UID: \"a8d7204c-9d11-4944-bc37-a5788a67aaab\") " pod="kube-system/coredns-7d764666f9-mhwh5"
	Dec 06 09:51:51 no-preload-521770 kubelet[2212]: I1206 09:51:51.995225    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6be872af-41f0-4aae-adf9-40313b511c3c-tmp\") pod \"storage-provisioner\" (UID: \"6be872af-41f0-4aae-adf9-40313b511c3c\") " pod="kube-system/storage-provisioner"
	Dec 06 09:51:51 no-preload-521770 kubelet[2212]: I1206 09:51:51.995289    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thbrb\" (UniqueName: \"kubernetes.io/projected/6be872af-41f0-4aae-adf9-40313b511c3c-kube-api-access-thbrb\") pod \"storage-provisioner\" (UID: \"6be872af-41f0-4aae-adf9-40313b511c3c\") " pod="kube-system/storage-provisioner"
	Dec 06 09:51:52 no-preload-521770 kubelet[2212]: E1206 09:51:52.738328    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mhwh5" containerName="coredns"
	Dec 06 09:51:52 no-preload-521770 kubelet[2212]: I1206 09:51:52.747814    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.747799046 podStartE2EDuration="14.747799046s" podCreationTimestamp="2025-12-06 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:51:52.74766629 +0000 UTC m=+21.186799881" watchObservedRunningTime="2025-12-06 09:51:52.747799046 +0000 UTC m=+21.186932638"
	Dec 06 09:51:52 no-preload-521770 kubelet[2212]: I1206 09:51:52.757023    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-mhwh5" podStartSLOduration=15.757010928 podStartE2EDuration="15.757010928s" podCreationTimestamp="2025-12-06 09:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:51:52.756865153 +0000 UTC m=+21.195998765" watchObservedRunningTime="2025-12-06 09:51:52.757010928 +0000 UTC m=+21.196144519"
	Dec 06 09:51:53 no-preload-521770 kubelet[2212]: E1206 09:51:53.739977    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mhwh5" containerName="coredns"
	Dec 06 09:51:54 no-preload-521770 kubelet[2212]: E1206 09:51:54.741992    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mhwh5" containerName="coredns"
	Dec 06 09:51:54 no-preload-521770 kubelet[2212]: I1206 09:51:54.812046    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2tg\" (UniqueName: \"kubernetes.io/projected/a011c5ce-2ff8-4279-bed5-cf9ec25a1eb0-kube-api-access-wz2tg\") pod \"busybox\" (UID: \"a011c5ce-2ff8-4279-bed5-cf9ec25a1eb0\") " pod="default/busybox"
	
	
	==> storage-provisioner [a5c3482879fde54c6fc7d2d2797b8852b874959cc42ea00f8d04b4590d42c96e] <==
	I1206 09:51:52.616448       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:51:52.628339       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:51:52.628400       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:51:52.630874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:51:52.637269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:51:52.637434       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:51:52.637868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61a34a7f-5161-40bf-8cdb-f26ed1163acf", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-521770_222a55ec-25d8-46ea-982b-5130a648856d became leader
	I1206 09:51:52.637907       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-521770_222a55ec-25d8-46ea-982b-5130a648856d!
	W1206 09:51:52.641921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:51:52.645853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:51:52.738329       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-521770_222a55ec-25d8-46ea-982b-5130a648856d!
	W1206 09:51:54.648751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:51:54.652526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:51:56.656318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:51:56.660066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:51:58.662746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:51:58.666351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:00.670124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:00.675621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:02.679556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:02.685426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:04.689093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:04.695146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:06.699291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:06.703160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521770 -n no-preload-521770
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-521770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.59s)
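For local re-triage of this failure, the two post-mortem probes above can be rerun by hand. This is a minimal sketch that only restates the helper commands from helpers_test.go:262 and helpers_test.go:269 with shell-safe quoting; the profile and context names are taken from this run:

	# API server state for the profile (same probe as helpers_test.go:262):
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-521770 -n no-preload-521770
	# Names of any pods not in the Running phase, across all namespaces
	# (same query as helpers_test.go:269):
	kubectl --context no-preload-521770 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'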

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (282.983109ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:52:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
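The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, whose failing command is quoted verbatim in the stderr ("sudo runc list -f json", which errored because /run/runc did not exist at that moment). A minimal sketch for reproducing the check by hand; the profile name is taken from this run, and "minikube ssh" / "minikube logs" are standard minikube commands, not part of the test itself:

	# Does runc's state directory exist inside the node?
	minikube ssh -p newest-cni-641599 -- sudo ls /run/runc
	# Rerun the exact command the addon paused-check executed:
	minikube ssh -p newest-cni-641599 -- sudo runc list -f json
	# Collect full logs for a GitHub issue, as the advice box above suggests:
	minikube logs -p newest-cni-641599 --file=logs.txt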
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-641599
helpers_test.go:243: (dbg) docker inspect newest-cni-641599:

-- stdout --
	[
	    {
	        "Id": "1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9",
	        "Created": "2025-12-06T09:52:17.019711231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 779831,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:17.060075085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/hostname",
	        "HostsPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/hosts",
	        "LogPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9-json.log",
	        "Name": "/newest-cni-641599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-641599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-641599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9",
	                "LowerDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-641599",
	                "Source": "/var/lib/docker/volumes/newest-cni-641599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-641599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-641599",
	                "name.minikube.sigs.k8s.io": "newest-cni-641599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6744ebc4208f6ae0e51d5f7eeb4ad488ca2a16110a3cd423bd61ad2e584c1371",
	            "SandboxKey": "/var/run/docker/netns/6744ebc4208f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33207"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-641599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50ff6f7233794e663169427b7cb259f6e1696c2d99c914cb9ccec4f1b26d87f1",
	                    "EndpointID": "86ec805fb74267e83503e654e120485dc98e521314ec6e3d4a5b4e31e615004f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "26:25:1a:f9:6a:10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-641599",
	                        "1412f254e7b6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
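For reference, the host ports used to reach this node are recorded in the NetworkSettings block above; a jq one-liner (illustrative, assuming jq is available on the host) extracts the mapped SSH port:

	# Extract the host port bound to the node's SSH port (22/tcp):
	docker inspect newest-cni-641599 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'
	# prints 33206, matching the NetworkSettings block above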
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-641599 -n newest-cni-641599
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-641599 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-641599 logs -n 25: (1.053217287s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │                     │
	│ stop    │ -p old-k8s-version-507108 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-507108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p cert-expiration-669264 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-669264       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p cert-expiration-669264                                                                                                                                                                                                                            │ cert-expiration-669264       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ image   │ old-k8s-version-507108 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ pause   │ -p old-k8s-version-507108 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                            │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p kubernetes-upgrade-581224                                                                                                                                                                                                                         │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                            │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-920129                                                                                                                                                                                                                      │ disable-driver-mounts-920129 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p no-preload-521770 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p stopped-upgrade-031481                                                                                                                                                                                                                            │ stopped-upgrade-031481       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p no-preload-521770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:52:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:52:24.230296  782026 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:52:24.230422  782026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:52:24.230432  782026 out.go:374] Setting ErrFile to fd 2...
	I1206 09:52:24.230439  782026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:52:24.230661  782026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:52:24.231206  782026 out.go:368] Setting JSON to false
	I1206 09:52:24.232660  782026 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9288,"bootTime":1765005456,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:52:24.232745  782026 start.go:143] virtualization: kvm guest
	I1206 09:52:24.234621  782026 out.go:179] * [no-preload-521770] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:52:24.235951  782026 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:52:24.235969  782026 notify.go:221] Checking for updates...
	I1206 09:52:24.238001  782026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:52:24.239277  782026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:24.240424  782026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:52:24.241537  782026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:52:24.243281  782026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:52:24.245035  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:24.245892  782026 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:52:24.276543  782026 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:52:24.276704  782026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:52:24.353259  782026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:52:24.340425815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:52:24.353400  782026 docker.go:319] overlay module found
	I1206 09:52:24.356378  782026 out.go:179] * Using the docker driver based on existing profile
	I1206 09:52:24.357384  782026 start.go:309] selected driver: docker
	I1206 09:52:24.357403  782026 start.go:927] validating driver "docker" against &{Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:24.357556  782026 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:52:24.358245  782026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:52:24.428901  782026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:52:24.419008447 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:52:24.429187  782026 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:24.429223  782026 cni.go:84] Creating CNI manager for ""
	I1206 09:52:24.429316  782026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:24.429384  782026 start.go:353] cluster config:
	{Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:24.431915  782026 out.go:179] * Starting "no-preload-521770" primary control-plane node in "no-preload-521770" cluster
	I1206 09:52:24.432866  782026 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:52:24.433846  782026 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:52:24.434780  782026 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:52:24.434892  782026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:52:24.434902  782026 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/config.json ...
	I1206 09:52:24.435051  782026 cache.go:107] acquiring lock: {Name:mk3f028e80f8ac87cdcd24320d70e36a894791c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435140  782026 cache.go:107] acquiring lock: {Name:mkdc523156a072e4947d577065578e91a9732b77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435195  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1206 09:52:24.435150  782026 cache.go:107] acquiring lock: {Name:mke4ba1139ae959d606dd38112efde7d4d448b97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435205  782026 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 72.054µs
	I1206 09:52:24.435222  782026 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 09:52:24.435195  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 09:52:24.435241  782026 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 196.366µs
	I1206 09:52:24.435210  782026 cache.go:107] acquiring lock: {Name:mkd3b5a28f8041fde0d80c5102632df37b913591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435260  782026 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435276  782026 cache.go:107] acquiring lock: {Name:mk715c193fee45ce0be781bde9149a4d7c68db76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435277  782026 cache.go:107] acquiring lock: {Name:mkacf44d4c7d284d9b31511b6f07c1d37c06e59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435307  782026 cache.go:107] acquiring lock: {Name:mk06fdc2189bb8fbdd9f705d1a497d61567fd9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435321  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 09:52:24.435319  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 09:52:24.435328  782026 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 55.543µs
	I1206 09:52:24.435337  782026 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 09:52:24.435334  782026 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 245.346µs
	I1206 09:52:24.435326  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 09:52:24.435346  782026 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 09:52:24.435347  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 09:52:24.435347  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 09:52:24.435357  782026 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 151.015µs
	I1206 09:52:24.435350  782026 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 76.006µs
	I1206 09:52:24.435367  782026 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435369  782026 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435049  782026 cache.go:107] acquiring lock: {Name:mke865bc2a308b5226070dc1deef9b7218b9996f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435428  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 09:52:24.435435  782026 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 401.873µs
	I1206 09:52:24.435442  782026 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 09:52:24.435445  782026 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 56.266µs
	I1206 09:52:24.435479  782026 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435539  782026 cache.go:87] Successfully saved all images to host disk.
	I1206 09:52:24.458045  782026 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:52:24.458074  782026 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:52:24.458091  782026 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:52:24.458128  782026 start.go:360] acquireMachinesLock for no-preload-521770: {Name:mkf85c9fe05269c67d1e37d10022df9548bf23d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.458195  782026 start.go:364] duration metric: took 47.288µs to acquireMachinesLock for "no-preload-521770"
	I1206 09:52:24.458214  782026 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:52:24.458221  782026 fix.go:54] fixHost starting: 
	I1206 09:52:24.458538  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:24.478477  782026 fix.go:112] recreateIfNeeded on no-preload-521770: state=Stopped err=<nil>
	W1206 09:52:24.478528  782026 fix.go:138] unexpected machine state, will restart: <nil>
	W1206 09:52:22.733773  771291 node_ready.go:57] node "default-k8s-diff-port-759696" has "Ready":"False" status (will retry)
	I1206 09:52:23.232972  771291 node_ready.go:49] node "default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.233002  771291 node_ready.go:38] duration metric: took 11.002657942s for node "default-k8s-diff-port-759696" to be "Ready" ...
	I1206 09:52:23.233017  771291 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:23.233074  771291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:23.246060  771291 api_server.go:72] duration metric: took 11.380999717s to wait for apiserver process to appear ...
	I1206 09:52:23.246087  771291 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:23.246110  771291 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1206 09:52:23.250298  771291 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1206 09:52:23.251303  771291 api_server.go:141] control plane version: v1.34.2
	I1206 09:52:23.251332  771291 api_server.go:131] duration metric: took 5.237123ms to wait for apiserver health ...
	I1206 09:52:23.251343  771291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:23.255051  771291 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:23.255095  771291 system_pods.go:61] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:23.255118  771291 system_pods.go:61] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.255131  771291 system_pods.go:61] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.255144  771291 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.255151  771291 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.255160  771291 system_pods.go:61] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.255167  771291 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.255177  771291 system_pods.go:61] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:23.255191  771291 system_pods.go:74] duration metric: took 3.838741ms to wait for pod list to return data ...
	I1206 09:52:23.255204  771291 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:23.257429  771291 default_sa.go:45] found service account: "default"
	I1206 09:52:23.257448  771291 default_sa.go:55] duration metric: took 2.236469ms for default service account to be created ...
	I1206 09:52:23.257484  771291 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:52:23.260094  771291 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:23.260118  771291 system_pods.go:89] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:23.260123  771291 system_pods.go:89] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.260172  771291 system_pods.go:89] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.260176  771291 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.260180  771291 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.260187  771291 system_pods.go:89] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.260190  771291 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.260198  771291 system_pods.go:89] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:23.260226  771291 retry.go:31] will retry after 301.255841ms: missing components: kube-dns
	I1206 09:52:23.564931  771291 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:23.564969  771291 system_pods.go:89] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Running
	I1206 09:52:23.564978  771291 system_pods.go:89] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.564985  771291 system_pods.go:89] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.564990  771291 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.564997  771291 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.565002  771291 system_pods.go:89] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.565007  771291 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.565012  771291 system_pods.go:89] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Running
	I1206 09:52:23.565023  771291 system_pods.go:126] duration metric: took 307.529453ms to wait for k8s-apps to be running ...
	I1206 09:52:23.565037  771291 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:52:23.565093  771291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:52:23.578827  771291 system_svc.go:56] duration metric: took 13.778342ms WaitForService to wait for kubelet
	I1206 09:52:23.578859  771291 kubeadm.go:587] duration metric: took 11.713805961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:23.578882  771291 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:23.581992  771291 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:23.582044  771291 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:23.582067  771291 node_conditions.go:105] duration metric: took 3.178425ms to run NodePressure ...
	I1206 09:52:23.582093  771291 start.go:242] waiting for startup goroutines ...
	I1206 09:52:23.582106  771291 start.go:247] waiting for cluster config update ...
	I1206 09:52:23.582126  771291 start.go:256] writing updated cluster config ...
	I1206 09:52:23.582452  771291 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:23.588357  771291 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:23.664775  771291 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.669577  771291 pod_ready.go:94] pod "coredns-66bc5c9577-gpnjq" is "Ready"
	I1206 09:52:23.669603  771291 pod_ready.go:86] duration metric: took 4.791126ms for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.671822  771291 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.675792  771291 pod_ready.go:94] pod "etcd-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.675811  771291 pod_ready.go:86] duration metric: took 3.966323ms for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.677683  771291 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.681360  771291 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.681380  771291 pod_ready.go:86] duration metric: took 3.676297ms for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.683330  771291 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.993645  771291 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.993670  771291 pod_ready.go:86] duration metric: took 310.321581ms for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.194283  771291 pod_ready.go:83] waiting for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.594004  771291 pod_ready.go:94] pod "kube-proxy-jstq5" is "Ready"
	I1206 09:52:24.594047  771291 pod_ready.go:86] duration metric: took 399.738837ms for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.795328  771291 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:25.193288  771291 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:25.193321  771291 pod_ready.go:86] duration metric: took 397.96695ms for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:25.193336  771291 pod_ready.go:40] duration metric: took 1.604949342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:25.245818  771291 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:52:25.247685  771291 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-759696" cluster and "default" namespace by default
	W1206 09:52:22.907887  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	W1206 09:52:24.909512  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	I1206 09:52:28.881377  778743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:52:28.881427  778743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:52:28.881601  778743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:52:28.881695  778743 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:52:28.881749  778743 kubeadm.go:319] OS: Linux
	I1206 09:52:28.881841  778743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:52:28.881928  778743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:52:28.882000  778743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:52:28.882049  778743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:52:28.882132  778743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:52:28.882210  778743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:52:28.882277  778743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:52:28.882345  778743 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:52:28.882436  778743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:52:28.882576  778743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:52:28.882704  778743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:52:28.882775  778743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:52:28.884431  778743 out.go:252]   - Generating certificates and keys ...
	I1206 09:52:28.884529  778743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:52:28.884636  778743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:52:28.884748  778743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:52:28.884841  778743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:52:28.884943  778743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:52:28.885024  778743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:52:28.885100  778743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:52:28.885280  778743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-641599] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:52:28.885358  778743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:52:28.885556  778743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-641599] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:52:28.885624  778743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:52:28.885735  778743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:52:28.885800  778743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:52:28.885872  778743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:52:28.885933  778743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:52:28.885985  778743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:52:28.886031  778743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:52:28.886092  778743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:52:28.886144  778743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:52:28.886235  778743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:52:28.886303  778743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:52:28.887431  778743 out.go:252]   - Booting up control plane ...
	I1206 09:52:28.887524  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:52:28.887617  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:52:28.887698  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:52:28.887847  778743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:52:28.887990  778743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:52:28.888099  778743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:52:28.888175  778743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:52:28.888229  778743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:52:28.888348  778743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:52:28.888468  778743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:52:28.888545  778743 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.352933ms
	I1206 09:52:28.888691  778743 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:52:28.888800  778743 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:52:28.888930  778743 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:52:28.889053  778743 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:52:28.889201  778743 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005929457s
	I1206 09:52:28.889315  778743 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.86815486s
	I1206 09:52:28.889418  778743 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502032967s
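Each endpoint named in the control-plane checks above can also be probed by hand from inside the node; a sketch using curl (-k skips verification of the self-signed serving certs, and each endpoint prints "ok" when healthy):

	curl -s  http://127.0.0.1:10248/healthz      # kubelet
	curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez       # kube-scheduler
	curl -sk https://192.168.76.2:8443/livez     # kube-apiserver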
	I1206 09:52:28.889585  778743 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:52:28.889700  778743 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:52:28.889754  778743 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:52:28.889915  778743 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-641599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:52:28.889974  778743 kubeadm.go:319] [bootstrap-token] Using token: w8ash3.bz5dwngp2dkzla91
	I1206 09:52:28.891194  778743 out.go:252]   - Configuring RBAC rules ...
	I1206 09:52:28.891287  778743 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:52:28.891362  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:52:28.891577  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:52:28.891727  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:52:28.891834  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:52:28.891911  778743 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:52:28.892085  778743 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:52:28.892135  778743 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:52:28.892202  778743 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:52:28.892211  778743 kubeadm.go:319] 
	I1206 09:52:28.892294  778743 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:52:28.892309  778743 kubeadm.go:319] 
	I1206 09:52:28.892420  778743 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:52:28.892429  778743 kubeadm.go:319] 
	I1206 09:52:28.892488  778743 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:52:28.892587  778743 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:52:28.892635  778743 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:52:28.892639  778743 kubeadm.go:319] 
	I1206 09:52:28.892698  778743 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:52:28.892710  778743 kubeadm.go:319] 
	I1206 09:52:28.892782  778743 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:52:28.892791  778743 kubeadm.go:319] 
	I1206 09:52:28.892852  778743 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:52:28.892929  778743 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:52:28.892989  778743 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:52:28.892995  778743 kubeadm.go:319] 
	I1206 09:52:28.893066  778743 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:52:28.893153  778743 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:52:28.893165  778743 kubeadm.go:319] 
	I1206 09:52:28.893263  778743 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w8ash3.bz5dwngp2dkzla91 \
	I1206 09:52:28.893386  778743 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:52:28.893421  778743 kubeadm.go:319] 	--control-plane 
	I1206 09:52:28.893430  778743 kubeadm.go:319] 
	I1206 09:52:28.893539  778743 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:52:28.893551  778743 kubeadm.go:319] 
	I1206 09:52:28.893668  778743 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w8ash3.bz5dwngp2dkzla91 \
	I1206 09:52:28.893821  778743 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
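The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key in DER form. Assuming an RSA CA key (kubeadm's default) and the certificateDir used in this run, it can be recomputed with:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'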
	I1206 09:52:28.893837  778743 cni.go:84] Creating CNI manager for ""
	I1206 09:52:28.893846  778743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:28.895223  778743 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:52:24.480389  782026 out.go:252] * Restarting existing docker container for "no-preload-521770" ...
	I1206 09:52:24.480470  782026 cli_runner.go:164] Run: docker start no-preload-521770
	I1206 09:52:24.745004  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:24.764399  782026 kic.go:430] container "no-preload-521770" state is running.
	I1206 09:52:24.764844  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:24.783728  782026 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/config.json ...
	I1206 09:52:24.784054  782026 machine.go:94] provisionDockerMachine start ...
	I1206 09:52:24.784143  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:24.805518  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:24.805838  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:24.805858  782026 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:52:24.806622  782026 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32898->127.0.0.1:33211: read: connection reset by peer
	I1206 09:52:27.951364  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-521770
	
	I1206 09:52:27.951393  782026 ubuntu.go:182] provisioning hostname "no-preload-521770"
	I1206 09:52:27.951451  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:27.971393  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:27.971652  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:27.971668  782026 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-521770 && echo "no-preload-521770" | sudo tee /etc/hostname
	I1206 09:52:28.116596  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-521770
	
	I1206 09:52:28.116743  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.145696  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:28.146039  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:28.146078  782026 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521770/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:52:28.277194  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
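The multi-line command above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1: it rewrites an existing 127.0.1.1 entry or appends one. The result can be checked with a lookup such as:

	getent hosts no-preload-521770    # should resolve via /etc/hosts to 127.0.1.1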
	I1206 09:52:28.277232  782026 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:52:28.277279  782026 ubuntu.go:190] setting up certificates
	I1206 09:52:28.277296  782026 provision.go:84] configureAuth start
	I1206 09:52:28.277367  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:28.297825  782026 provision.go:143] copyHostCerts
	I1206 09:52:28.297892  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:52:28.297904  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:52:28.297965  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:52:28.298076  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:52:28.298087  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:52:28.298116  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:52:28.298173  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:52:28.298181  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:52:28.298204  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:52:28.298263  782026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.no-preload-521770 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-521770]
	I1206 09:52:28.338699  782026 provision.go:177] copyRemoteCerts
	I1206 09:52:28.338753  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:52:28.338786  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.359141  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:28.454191  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:52:28.473624  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:52:28.491855  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:52:28.510074  782026 provision.go:87] duration metric: took 232.761897ms to configureAuth
	I1206 09:52:28.510100  782026 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:52:28.510273  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:28.510386  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.530157  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:28.530466  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:28.530510  782026 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:52:28.858502  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:52:28.858532  782026 machine.go:97] duration metric: took 4.074459793s to provisionDockerMachine
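The CRIO_MINIKUBE_OPTIONS value written to /etc/sysconfig/crio.minikube is presumably consumed by the crio systemd unit through an EnvironmentFile= directive in minikube's base image (an assumption, not confirmed by this log); one way to check on the node:

	systemctl cat crio | grep -i environmentfile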
	I1206 09:52:28.858548  782026 start.go:293] postStartSetup for "no-preload-521770" (driver="docker")
	I1206 09:52:28.858563  782026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:52:28.858636  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:52:28.858705  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.878915  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:28.979184  782026 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:52:28.983787  782026 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:52:28.983819  782026 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:52:28.983832  782026 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:52:28.983889  782026 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:52:28.983970  782026 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:52:28.984063  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:52:28.992522  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:52:29.014583  782026 start.go:296] duration metric: took 156.016922ms for postStartSetup
	I1206 09:52:29.014683  782026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:52:29.014736  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.034344  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.129648  782026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:52:29.135314  782026 fix.go:56] duration metric: took 4.677087094s for fixHost
	I1206 09:52:29.135342  782026 start.go:83] releasing machines lock for "no-preload-521770", held for 4.677136228s
	I1206 09:52:29.135410  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:29.162339  782026 ssh_runner.go:195] Run: cat /version.json
	I1206 09:52:29.162396  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.162642  782026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:52:29.162728  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.185520  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.186727  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.349551  782026 ssh_runner.go:195] Run: systemctl --version
	I1206 09:52:29.358158  782026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:52:29.394015  782026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:52:29.398851  782026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:52:29.398921  782026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:52:29.407814  782026 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:52:29.407837  782026 start.go:496] detecting cgroup driver to use...
	I1206 09:52:29.407872  782026 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:52:29.407930  782026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:52:29.423937  782026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:52:29.438135  782026 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:52:29.438206  782026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:52:29.455656  782026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:52:29.469279  782026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:52:29.552150  782026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:52:29.653579  782026 docker.go:234] disabling docker service ...
	I1206 09:52:29.653654  782026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:52:29.673000  782026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:52:29.690524  782026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:52:29.786324  782026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:52:29.872262  782026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:52:29.885199  782026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:52:29.900924  782026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:52:29.900982  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.910821  782026 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:52:29.910889  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.919823  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.929149  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.938262  782026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:52:29.946657  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.955647  782026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.964498  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.973324  782026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:52:29.980560  782026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:52:29.987564  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:30.070090  782026 ssh_runner.go:195] Run: sudo systemctl restart crio
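Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before the restart (TOML section placement assumed from stock CRI-O config; the seds only rewrite the individual key lines):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]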
	I1206 09:52:30.217146  782026 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:52:30.217230  782026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:52:30.222028  782026 start.go:564] Will wait 60s for crictl version
	I1206 09:52:30.222111  782026 ssh_runner.go:195] Run: which crictl
	I1206 09:52:30.226345  782026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:52:30.254418  782026 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:52:30.254555  782026 ssh_runner.go:195] Run: crio --version
	I1206 09:52:30.293642  782026 ssh_runner.go:195] Run: crio --version
	I1206 09:52:30.325962  782026 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	W1206 09:52:27.407536  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	I1206 09:52:29.407975  771042 node_ready.go:49] node "embed-certs-997968" is "Ready"
	I1206 09:52:29.408008  771042 node_ready.go:38] duration metric: took 11.003669422s for node "embed-certs-997968" to be "Ready" ...
	I1206 09:52:29.408027  771042 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:29.408075  771042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:29.420440  771042 api_server.go:72] duration metric: took 11.320537531s to wait for apiserver process to appear ...
	I1206 09:52:29.420477  771042 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:29.420500  771042 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:52:29.425249  771042 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 09:52:29.426270  771042 api_server.go:141] control plane version: v1.34.2
	I1206 09:52:29.426306  771042 api_server.go:131] duration metric: took 5.819336ms to wait for apiserver health ...
	I1206 09:52:29.426317  771042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:29.429954  771042 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:29.429999  771042 system_pods.go:61] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.430021  771042 system_pods.go:61] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.430033  771042 system_pods.go:61] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.430039  771042 system_pods.go:61] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.430044  771042 system_pods.go:61] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.430050  771042 system_pods.go:61] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.430054  771042 system_pods.go:61] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.430065  771042 system_pods.go:61] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.430084  771042 system_pods.go:74] duration metric: took 3.759477ms to wait for pod list to return data ...
	I1206 09:52:29.430098  771042 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:29.432724  771042 default_sa.go:45] found service account: "default"
	I1206 09:52:29.432745  771042 default_sa.go:55] duration metric: took 2.638778ms for default service account to be created ...
	I1206 09:52:29.432752  771042 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:52:29.435843  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:29.435879  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.435898  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.435906  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.435918  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.435928  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.435934  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.435940  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.435950  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.435977  771042 retry.go:31] will retry after 248.263392ms: missing components: kube-dns
	I1206 09:52:29.688876  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:29.688919  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.688928  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.688936  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.688941  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.688948  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.688953  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.688958  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.688965  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.689163  771042 retry.go:31] will retry after 320.128103ms: missing components: kube-dns
	I1206 09:52:30.018114  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:30.018163  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:30.018172  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:30.018184  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:30.018189  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:30.018203  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:30.018209  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:30.018214  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:30.018220  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:30.018241  771042 retry.go:31] will retry after 435.909841ms: missing components: kube-dns
	I1206 09:52:30.459320  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:30.459353  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:30.459361  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:30.459367  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:30.459371  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:30.459375  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:30.459378  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:30.459382  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:30.459390  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:30.459410  771042 retry.go:31] will retry after 560.042985ms: missing components: kube-dns
	I1206 09:52:30.327261  782026 cli_runner.go:164] Run: docker network inspect no-preload-521770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:52:30.345176  782026 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:52:30.349559  782026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:52:30.361149  782026 kubeadm.go:884] updating cluster {Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:52:30.361304  782026 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:52:30.361352  782026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:52:30.394137  782026 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:52:30.394157  782026 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:52:30.394165  782026 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:52:30.394264  782026 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-521770 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
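The kubelet drop-in above uses the standard systemd override pattern: the empty ExecStart= clears the packaged command line before the replacement is set. --cgroups-per-qos=false together with the empty --enforce-node-allocatable= disables the kubelet's QoS and allocatable cgroup management, while --node-ip and --hostname-override pin the node identity to the container's address. Once written (see the scp below), the merged unit can be inspected with:

	systemctl cat kubelet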
	I1206 09:52:30.394337  782026 ssh_runner.go:195] Run: crio config
	I1206 09:52:30.445676  782026 cni.go:84] Creating CNI manager for ""
	I1206 09:52:30.445701  782026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:30.445721  782026 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:52:30.445751  782026 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521770 NodeName:no-preload-521770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:52:30.445918  782026 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521770"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
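Once this config is written out as /var/tmp/minikube/kubeadm.yaml.new (see the scp below), it can be sanity-checked before use; a sketch, assuming a kubeadm build that ships the validate subcommand (present in recent releases):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new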
	
	I1206 09:52:30.446005  782026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:52:30.455026  782026 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:52:30.455103  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:52:30.465415  782026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 09:52:30.479189  782026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:52:30.492386  782026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1206 09:52:30.505928  782026 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:52:30.510294  782026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:52:30.520891  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:30.616935  782026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:30.643233  782026 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770 for IP: 192.168.94.2
	I1206 09:52:30.643253  782026 certs.go:195] generating shared ca certs ...
	I1206 09:52:30.643270  782026 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:30.643417  782026 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:52:30.643475  782026 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:52:30.643487  782026 certs.go:257] generating profile certs ...
	I1206 09:52:30.643572  782026 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/client.key
	I1206 09:52:30.643626  782026 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.key.1f412e4b
	I1206 09:52:30.643661  782026 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.key
	I1206 09:52:30.643767  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:52:30.643797  782026 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:52:30.643807  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:52:30.643835  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:52:30.643858  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:52:30.643882  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:52:30.643923  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:52:30.644530  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:52:30.663921  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:52:30.683993  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:52:30.703729  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:52:30.728228  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:52:30.750676  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:52:30.770880  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:52:30.791391  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:52:30.810670  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:52:30.829739  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:52:30.848932  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:52:30.867931  782026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:52:30.880912  782026 ssh_runner.go:195] Run: openssl version
	I1206 09:52:30.887389  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.895646  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:52:30.903571  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.907605  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.907658  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.943625  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:52:30.951924  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.959877  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:52:30.967663  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.971326  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.971370  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:52:31.007142  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:52:31.015641  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.024412  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:52:31.032674  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.036926  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.036985  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.082560  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
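The hash-named symlinks being tested (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash scheme: the file name is the certificate's subject hash plus a ".0" collision counter, which is what lets TLS verification find a CA by directory lookup. The hash for a given PEM comes from:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941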
	I1206 09:52:31.090907  782026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:52:31.095147  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:52:31.132949  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:52:31.180275  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:52:31.227799  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:52:31.286217  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:52:31.341886  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
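The -checkend 86400 runs above exit 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which is how minikube decides whether a cert needs regeneration; for example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for at least 24h"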
	I1206 09:52:31.383881  782026 kubeadm.go:401] StartCluster: {Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:31.383997  782026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:52:31.384066  782026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:52:31.416718  782026 cri.go:89] found id: "9dc873b13be2daef40a2751e9c41eeada071f9d2a36935447fdcf8f69e38bcb0"
	I1206 09:52:31.416742  782026 cri.go:89] found id: "4740c81bbda6eb396add856fa79e529e77045345b6b8aafa409f0c035427e3e5"
	I1206 09:52:31.416748  782026 cri.go:89] found id: "1180b54a98400f332dbb4dda677c01fc02e3c44f901938b0567810c83d6df692"
	I1206 09:52:31.416753  782026 cri.go:89] found id: "585f10915444acd7acfdddbe9415b18fc4bb7c9d1e5009ad15a8bf10a9129068"
	I1206 09:52:31.416758  782026 cri.go:89] found id: ""
	I1206 09:52:31.416811  782026 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:52:31.431324  782026 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:52:31Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:52:31.431427  782026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:52:31.441532  782026 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:52:31.441554  782026 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:52:31.441600  782026 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:52:31.451039  782026 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:52:31.451930  782026 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-521770" does not appear in /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:31.452547  782026 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-499330/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-521770" cluster setting kubeconfig missing "no-preload-521770" context setting]
	I1206 09:52:31.453822  782026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:31.456004  782026 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:52:31.465352  782026 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1206 09:52:31.465394  782026 kubeadm.go:602] duration metric: took 23.833546ms to restartPrimaryControlPlane
	I1206 09:52:31.465406  782026 kubeadm.go:403] duration metric: took 81.54039ms to StartCluster
	I1206 09:52:31.465427  782026 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:31.465520  782026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:31.468310  782026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
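	
	==> note: repairing the kubeconfig <==
	The kubeconfig.go lines above show the repair path: the "no-preload-521770" cluster and context entries are missing from the kubeconfig, so minikube rewrites the file, serializing writers with a file lock (the lock.go lines). A minimal sketch of that kind of repair using client-go's clientcmd package; this is an illustration, not minikube's implementation, and the locking is omitted:
	
	// kubeconfig_repair.go: add a missing cluster and context entry.
	package main
	
	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)
	
	func addCluster(path, name, server string, caData []byte) error {
		cfg, err := clientcmd.LoadFromFile(path) // parse the existing kubeconfig
		if err != nil {
			return err
		}
		cfg.Clusters[name] = &clientcmdapi.Cluster{
			Server:                   server, // e.g. https://192.168.94.2:8443
			CertificateAuthorityData: caData,
		}
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
		return clientcmd.WriteToFile(*cfg, path)
	}
	
	func main() {} // sketch only: call addCluster with real values to use it
	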
	I1206 09:52:31.468628  782026 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:52:31.468678  782026 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:52:31.468786  782026 addons.go:70] Setting storage-provisioner=true in profile "no-preload-521770"
	I1206 09:52:31.468806  782026 addons.go:239] Setting addon storage-provisioner=true in "no-preload-521770"
	W1206 09:52:31.468818  782026 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:52:31.468847  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.468862  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:31.468867  782026 addons.go:70] Setting dashboard=true in profile "no-preload-521770"
	I1206 09:52:31.468883  782026 addons.go:70] Setting default-storageclass=true in profile "no-preload-521770"
	I1206 09:52:31.468914  782026 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521770"
	I1206 09:52:31.468892  782026 addons.go:239] Setting addon dashboard=true in "no-preload-521770"
	W1206 09:52:31.469002  782026 addons.go:248] addon dashboard should already be in state true
	I1206 09:52:31.469030  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.469241  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.469323  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.469518  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.471225  782026 out.go:179] * Verifying Kubernetes components...
	I1206 09:52:31.474638  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:31.493357  782026 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 09:52:31.493374  782026 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:31.494645  782026 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:31.494664  782026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:52:31.494695  782026 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:52:28.896370  778743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:52:28.900738  778743 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1206 09:52:28.900757  778743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:52:28.915819  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:52:29.131536  778743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:52:29.131597  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:29.131646  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-641599 minikube.k8s.io/updated_at=2025_12_06T09_52_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=newest-cni-641599 minikube.k8s.io/primary=true
	I1206 09:52:29.144444  778743 ops.go:34] apiserver oom_adj: -16
	I1206 09:52:29.227957  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:29.728684  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:30.228791  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:30.728381  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:31.228935  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:31.728685  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
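	
	==> note: polling for the default service account <==
	The repeated "kubectl get sa default" runs above, one every ~500ms, are a plain poll: kubeadm has just brought the control plane up, and the default service account only exists once the controller manager creates it, which gates the RBAC step that follows. An equivalent wait loop as a Go sketch (assumes kubectl on PATH with a configured kubeconfig):
	
	// wait_sa.go: retry until the default service account exists.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "get", "sa", "default", "-n", "default").Run()
			if err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		fmt.Println("timed out waiting for default service account")
	}
	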
	I1206 09:52:31.024277  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:31.024322  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Running
	I1206 09:52:31.024332  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:31.024337  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:31.024343  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:31.024349  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:31.024355  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:31.024360  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:31.024370  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Running
	I1206 09:52:31.024380  771042 system_pods.go:126] duration metric: took 1.591621605s to wait for k8s-apps to be running ...
	I1206 09:52:31.024393  771042 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:52:31.024440  771042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:52:31.038119  771042 system_svc.go:56] duration metric: took 13.715424ms WaitForService to wait for kubelet
	I1206 09:52:31.038150  771042 kubeadm.go:587] duration metric: took 12.938253131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:31.038185  771042 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:31.041178  771042 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:31.041209  771042 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:31.041227  771042 node_conditions.go:105] duration metric: took 3.034732ms to run NodePressure ...
	I1206 09:52:31.041253  771042 start.go:242] waiting for startup goroutines ...
	I1206 09:52:31.041267  771042 start.go:247] waiting for cluster config update ...
	I1206 09:52:31.041282  771042 start.go:256] writing updated cluster config ...
	I1206 09:52:31.041621  771042 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:31.045304  771042 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:31.049252  771042 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.053658  771042 pod_ready.go:94] pod "coredns-66bc5c9577-kw8nl" is "Ready"
	I1206 09:52:31.053679  771042 pod_ready.go:86] duration metric: took 4.401998ms for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.055651  771042 pod_ready.go:83] waiting for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.059893  771042 pod_ready.go:94] pod "etcd-embed-certs-997968" is "Ready"
	I1206 09:52:31.059916  771042 pod_ready.go:86] duration metric: took 4.242092ms for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.061937  771042 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.065821  771042 pod_ready.go:94] pod "kube-apiserver-embed-certs-997968" is "Ready"
	I1206 09:52:31.065838  771042 pod_ready.go:86] duration metric: took 3.881454ms for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.067804  771042 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.450772  771042 pod_ready.go:94] pod "kube-controller-manager-embed-certs-997968" is "Ready"
	I1206 09:52:31.450805  771042 pod_ready.go:86] duration metric: took 382.979811ms for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.650858  771042 pod_ready.go:83] waiting for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.050643  771042 pod_ready.go:94] pod "kube-proxy-m2zpr" is "Ready"
	I1206 09:52:32.050679  771042 pod_ready.go:86] duration metric: took 399.791241ms for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.251448  771042 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.651295  771042 pod_ready.go:94] pod "kube-scheduler-embed-certs-997968" is "Ready"
	I1206 09:52:32.651333  771042 pod_ready.go:86] duration metric: took 399.807696ms for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.651350  771042 pod_ready.go:40] duration metric: took 1.606005846s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:32.715347  771042 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:52:32.717209  771042 out.go:179] * Done! kubectl is now configured to use "embed-certs-997968" cluster and "default" namespace by default
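	
	==> note: the pod readiness check <==
	The pod_ready.go waits above inspect each control-plane pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) for the Ready condition before printing "Done!". A client-go sketch of the same check, under the assumption of a reachable kubeconfig; this is not minikube's code:
	
	// ready_check.go: report whether all pods matching a label selector
	// in kube-system carry the Ready=True condition.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func allReady(cs *kubernetes.Clientset, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ok, err := allReady(kubernetes.NewForConfigOrDie(cfg), "component=etcd")
		fmt.Println(ok, err)
	}
	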
	I1206 09:52:31.494726  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.496408  782026 addons.go:239] Setting addon default-storageclass=true in "no-preload-521770"
	W1206 09:52:31.496441  782026 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:52:31.496486  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.496950  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.497158  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:52:31.497175  782026 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:52:31.497220  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.525895  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.525995  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.531373  782026 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:31.531497  782026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:52:31.531646  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.558891  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.617244  782026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:31.631670  782026 node_ready.go:35] waiting up to 6m0s for node "no-preload-521770" to be "Ready" ...
	I1206 09:52:31.637913  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:31.640598  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:52:31.640621  782026 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:52:31.655712  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:52:31.655739  782026 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:52:31.665130  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:31.670828  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:52:31.670852  782026 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:52:31.684031  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:52:31.684058  782026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:52:31.699299  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:52:31.699328  782026 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:52:31.715252  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:52:31.715293  782026 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:52:31.731123  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:52:31.731152  782026 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:52:31.746870  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:52:31.746901  782026 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:52:31.764204  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:52:31.764245  782026 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:52:31.778466  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
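	
	==> note: one apply for all dashboard manifests <==
	All ten dashboard manifests staged above are applied in a single kubectl invocation by repeating the -f flag, so the addon lands in one round trip. A sketch of building such an invocation (file list abbreviated; assumes kubectl on PATH):
	
	// apply_many.go: apply several manifests with one kubectl call.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		files := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ...and the rest
		}
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Println(string(out), err)
	}
	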
	I1206 09:52:32.765763  782026 node_ready.go:49] node "no-preload-521770" is "Ready"
	I1206 09:52:32.765807  782026 node_ready.go:38] duration metric: took 1.13410262s for node "no-preload-521770" to be "Ready" ...
	I1206 09:52:32.765825  782026 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:32.765878  782026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:33.377502  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.739521443s)
	I1206 09:52:33.377555  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.712396791s)
	I1206 09:52:33.377698  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.599185729s)
	I1206 09:52:33.377732  782026 api_server.go:72] duration metric: took 1.909066219s to wait for apiserver process to appear ...
	I1206 09:52:33.377745  782026 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:33.377766  782026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:52:33.379186  782026 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-521770 addons enable metrics-server
	
	I1206 09:52:33.382540  782026 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:52:33.382566  782026 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
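	
	==> note: healthz polling <==
	The two 500 responses above are the expected transient state just after an apiserver (re)start: /healthz enumerates every poststarthook, and the entries marked [-] (rbac/bootstrap-roles and, in the first probe, scheduling/bootstrap-system-priority-classes) simply have not completed yet; the second probe already shows the priority-class hook passing. minikube keeps polling until the endpoint returns 200. A minimal polling sketch; TLS verification is skipped here for brevity, whereas a real client should verify the cluster CA:
	
	// healthz_poll.go: poll the apiserver healthz endpoint until it is ok.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.94.2:8443/healthz" // endpoint from the log above
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				// a 500 body lists the poststarthooks still pending, as above
			}
			time.Sleep(time.Second)
		}
	}
	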
	I1206 09:52:33.384754  782026 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1206 09:52:32.228887  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:32.728657  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.228203  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.728879  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.831071  778743 kubeadm.go:1114] duration metric: took 4.699527608s to wait for elevateKubeSystemPrivileges
	I1206 09:52:33.831115  778743 kubeadm.go:403] duration metric: took 12.354833253s to StartCluster
	I1206 09:52:33.831139  778743 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:33.831222  778743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:33.834301  778743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:33.835103  778743 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:52:33.835251  778743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:52:33.835593  778743 config.go:182] Loaded profile config "newest-cni-641599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:33.836022  778743 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:52:33.836151  778743 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-641599"
	I1206 09:52:33.836171  778743 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-641599"
	I1206 09:52:33.836215  778743 host.go:66] Checking if "newest-cni-641599" exists ...
	I1206 09:52:33.836254  778743 addons.go:70] Setting default-storageclass=true in profile "newest-cni-641599"
	I1206 09:52:33.836283  778743 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-641599"
	I1206 09:52:33.836675  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.836819  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.836996  778743 out.go:179] * Verifying Kubernetes components...
	I1206 09:52:33.838578  778743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:33.865836  778743 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:33.385762  782026 addons.go:530] duration metric: took 1.917092445s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1206 09:52:33.877868  782026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:52:33.892187  782026 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:52:33.892234  782026 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:52:33.866996  778743 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:33.867020  778743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:52:33.867084  778743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-641599
	I1206 09:52:33.867713  778743 addons.go:239] Setting addon default-storageclass=true in "newest-cni-641599"
	I1206 09:52:33.867760  778743 host.go:66] Checking if "newest-cni-641599" exists ...
	I1206 09:52:33.868266  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.901090  778743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/newest-cni-641599/id_rsa Username:docker}
	I1206 09:52:33.902303  778743 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:33.902332  778743 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:52:33.902406  778743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-641599
	I1206 09:52:33.931188  778743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/newest-cni-641599/id_rsa Username:docker}
	I1206 09:52:33.956142  778743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:52:34.000905  778743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:34.019354  778743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:34.066170  778743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:34.207346  778743 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
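	
	==> note: the CoreDNS host record <==
	The sed pipeline above rewrites the Corefile stored in the kube-system/coredns ConfigMap, inserting a hosts block ahead of the forward plugin so that pods resolve host.minikube.internal to the host gateway (192.168.76.1). The same edit expressed with client-go, as a sketch rather than minikube's actual code:
	
	// coredns_hosts.go: inject a hosts{} block into the coredns Corefile.
	package main
	
	import (
		"context"
		"strings"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		hosts := "        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }\n"
		// Insert the hosts block just before the forward plugin, as the sed does.
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"        forward .", hosts+"        forward .", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	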
	I1206 09:52:34.209353  778743 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:34.209427  778743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:34.384953  778743 api_server.go:72] duration metric: took 549.802737ms to wait for apiserver process to appear ...
	I1206 09:52:34.384982  778743 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:34.385002  778743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:52:34.390257  778743 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 09:52:34.391261  778743 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:52:34.391294  778743 api_server.go:131] duration metric: took 6.303663ms to wait for apiserver health ...
	I1206 09:52:34.391304  778743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:34.393369  778743 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:52:34.394798  778743 addons.go:530] duration metric: took 559.141528ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:52:34.395190  778743 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:34.395225  778743 system_pods.go:61] "coredns-7d764666f9-8njm9" [97429e74-14c2-47b6-aecd-8b863a997474] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:52:34.395242  778743 system_pods.go:61] "etcd-newest-cni-641599" [ca0d2519-e026-4dee-a3fb-ce7df13ee8fc] Running
	I1206 09:52:34.395255  778743 system_pods.go:61] "kindnet-kv2gc" [0f27b79f-29eb-4e3e-9a65-fbc2529e4f09] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:52:34.395262  778743 system_pods.go:61] "kube-apiserver-newest-cni-641599" [40559cd7-889e-49dd-9f65-0b5e9a543dc2] Running
	I1206 09:52:34.395272  778743 system_pods.go:61] "kube-controller-manager-newest-cni-641599" [a609bb41-7a15-4452-9140-a79c35a026c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:52:34.395294  778743 system_pods.go:61] "kube-proxy-fv54r" [b74c4162-c9cd-43a6-9a4a-2162b2899489] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:52:34.395304  778743 system_pods.go:61] "kube-scheduler-newest-cni-641599" [81daab83-11f5-44cb-982c-212001fe43a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:52:34.395309  778743 system_pods.go:61] "storage-provisioner" [4de61ac3-6403-4c30-9cea-246b6f8bc458] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:52:34.395319  778743 system_pods.go:74] duration metric: took 4.008824ms to wait for pod list to return data ...
	I1206 09:52:34.395326  778743 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:34.397781  778743 default_sa.go:45] found service account: "default"
	I1206 09:52:34.397804  778743 default_sa.go:55] duration metric: took 2.472195ms for default service account to be created ...
	I1206 09:52:34.397819  778743 kubeadm.go:587] duration metric: took 562.671247ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:52:34.397836  778743 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:34.442287  778743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:34.442320  778743 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:34.442334  778743 node_conditions.go:105] duration metric: took 44.493558ms to run NodePressure ...
	I1206 09:52:34.442347  778743 start.go:242] waiting for startup goroutines ...
	I1206 09:52:34.714855  778743 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-641599" context rescaled to 1 replicas
	I1206 09:52:34.714897  778743 start.go:247] waiting for cluster config update ...
	I1206 09:52:34.714911  778743 start.go:256] writing updated cluster config ...
	I1206 09:52:34.715173  778743 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:34.769780  778743 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:52:34.772739  778743 out.go:179] * Done! kubectl is now configured to use "newest-cni-641599" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.141854795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.142526813Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=78ca6ce7-060c-41e4-81f0-92f9344f561c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.14526282Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.146280757Z" level=info msg="Ran pod sandbox 67a252de42d7ee96d552832ab9a800155fe6754cf3bbed1c27fde0cde1bd10c5 with infra container: kube-system/kube-proxy-fv54r/POD" id=78ca6ce7-060c-41e4-81f0-92f9344f561c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.1476173Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e754a35b-9df2-43c8-a9ca-36ceba499a8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.14801761Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ee42565d-7bd1-4673-b917-0c27b134e3a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.14985164Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.150911395Z" level=info msg="Ran pod sandbox b9eee87d79bd84a9ccdafffc4caa971be89e58038618abee182c7b9d09b181e6 with infra container: kube-system/kindnet-kv2gc/POD" id=e754a35b-9df2-43c8-a9ca-36ceba499a8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.151798574Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4eba534a-b6c9-4671-b680-d01d1f035584 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.153356581Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a956c7ab-7e18-458b-b6d5-a8e2541b8a84 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.154767937Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9149bea2-03ec-4486-b6bf-50ab737b4f34 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.158419026Z" level=info msg="Creating container: kube-system/kube-proxy-fv54r/kube-proxy" id=2ccf4ae0-ddf9-461c-bfc8-c53c7cc0af89 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.159354605Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.159501921Z" level=info msg="Creating container: kube-system/kindnet-kv2gc/kindnet-cni" id=59b8c970-ab6a-4463-8948-fe0958ae7b94 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.159601971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.168158332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.169045107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.169167439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.169639829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.207062338Z" level=info msg="Created container e3383393ce777c2e71d2153c58621e9a9de0d6e0729f439212c64f1f05e1056f: kube-system/kindnet-kv2gc/kindnet-cni" id=59b8c970-ab6a-4463-8948-fe0958ae7b94 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.207931086Z" level=info msg="Starting container: e3383393ce777c2e71d2153c58621e9a9de0d6e0729f439212c64f1f05e1056f" id=652b6abf-2c97-49de-870d-a4d6aa8b25d0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.210331115Z" level=info msg="Created container c8465ea852838ac50c31afce3fe0503344490ef5f88304b366cfe10d7211ac88: kube-system/kube-proxy-fv54r/kube-proxy" id=2ccf4ae0-ddf9-461c-bfc8-c53c7cc0af89 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.211077956Z" level=info msg="Starting container: c8465ea852838ac50c31afce3fe0503344490ef5f88304b366cfe10d7211ac88" id=d808e0d6-65fd-401e-a3ce-fb9339babc6c name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.212784265Z" level=info msg="Started container" PID=1596 containerID=e3383393ce777c2e71d2153c58621e9a9de0d6e0729f439212c64f1f05e1056f description=kube-system/kindnet-kv2gc/kindnet-cni id=652b6abf-2c97-49de-870d-a4d6aa8b25d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9eee87d79bd84a9ccdafffc4caa971be89e58038618abee182c7b9d09b181e6
	Dec 06 09:52:34 newest-cni-641599 crio[777]: time="2025-12-06T09:52:34.21633988Z" level=info msg="Started container" PID=1597 containerID=c8465ea852838ac50c31afce3fe0503344490ef5f88304b366cfe10d7211ac88 description=kube-system/kube-proxy-fv54r/kube-proxy id=d808e0d6-65fd-401e-a3ce-fb9339babc6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=67a252de42d7ee96d552832ab9a800155fe6754cf3bbed1c27fde0cde1bd10c5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e3383393ce777       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   b9eee87d79bd8       kindnet-kv2gc                               kube-system
	c8465ea852838       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   67a252de42d7e       kube-proxy-fv54r                            kube-system
	10054e16bf818       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   9af28861b4ec6       kube-controller-manager-newest-cni-641599   kube-system
	c75e16c486197       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   39a0edd902057       kube-scheduler-newest-cni-641599            kube-system
	979c139c80cac       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   ac7ebded54574       kube-apiserver-newest-cni-641599            kube-system
	0a051c103e846       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   d35588402b210       etcd-newest-cni-641599                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-641599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-641599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=newest-cni-641599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-641599
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:52:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:52:28 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:52:28 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:52:28 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 06 Dec 2025 09:52:28 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-641599
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                46a78757-c37e-4e88-b08d-a951bd452cce
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-641599                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-kv2gc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-641599             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-641599    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-fv54r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-641599             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-641599 event: Registered Node newest-cni-641599 in Controller
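	
	==> note: why the node is NotReady here <==
	The NotReady condition and the two not-ready taints above are expected at this instant: kubelet reports NetworkPluginNotReady until a CNI configuration file appears in /etc/cni/net.d, and kindnet writes that file moments later (see the kindnet log at the end of this section, where the CNI MTU and subnets are set up). A trivial host-side check for that file, as a sketch:
	
	// cni_check.go: report whether any CNI config exists yet.
	package main
	
	import (
		"fmt"
		"path/filepath"
	)
	
	func main() {
		matches, _ := filepath.Glob("/etc/cni/net.d/*")
		if len(matches) == 0 {
			fmt.Println("no CNI config yet: kubelet stays NotReady")
			return
		}
		for _, m := range matches {
			fmt.Println("CNI config present:", m)
		}
	}
	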
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [0a051c103e846eb270892a87acc55ddbd05e968358bee2abafe8bcac8a1c2ff0] <==
	{"level":"warn","ts":"2025-12-06T09:52:25.160665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.169518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.176674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.183491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.189645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.196748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.205484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.213289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.220643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.233650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.241526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.247728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.254928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.261487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.270175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.278370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.285177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.292203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.298685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.306164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.321263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.327636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.335639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.343353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:25.392044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60884","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:52:36 up  2:34,  0 user,  load average: 4.56, 2.95, 3.25
	Linux newest-cni-641599 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3383393ce777c2e71d2153c58621e9a9de0d6e0729f439212c64f1f05e1056f] <==
	I1206 09:52:34.450028       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:52:34.450323       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 09:52:34.450523       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:52:34.450543       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:52:34.450567       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:52:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:52:34.649968       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:52:34.650418       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:52:34.650480       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:52:34.650631       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:52:35.048438       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:52:35.048586       1 metrics.go:72] Registering metrics
	I1206 09:52:35.048752       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [979c139c80cac1ab8568f86e283a2f4c258a751a0e3d8ba766e2352e3b1aeef2] <==
	I1206 09:52:25.912777       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:25.912804       1 policy_source.go:248] refreshing policies
	E1206 09:52:25.934902       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1206 09:52:25.981025       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:52:25.983771       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:25.984020       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1206 09:52:25.987706       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:26.081580       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:52:26.785109       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1206 09:52:26.788865       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:52:26.788884       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:52:27.214033       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:52:27.254386       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:52:27.388295       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:52:27.393873       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1206 09:52:27.394811       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:52:27.398610       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:52:27.810264       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:52:28.282358       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:52:28.291025       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:52:28.298941       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:52:33.511685       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:52:33.614146       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:33.617991       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:33.812397       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [10054e16bf8180f219df99af829761e1e2b0fb35efbcaf7f60ccf271ecedd66b] <==
	I1206 09:52:32.615450       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.615490       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.615534       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.615777       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.615866       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.615974       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616010       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616067       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616159       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616286       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.615437       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616336       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616370       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616074       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616013       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.616793       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.617770       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.619412       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.626705       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:32.628739       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.634333       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-641599" podCIDRs=["10.42.0.0/24"]
	I1206 09:52:32.714928       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.714951       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:52:32.714958       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:52:32.727300       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [c8465ea852838ac50c31afce3fe0503344490ef5f88304b366cfe10d7211ac88] <==
	I1206 09:52:34.265553       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:52:34.334147       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:34.435070       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:34.435103       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 09:52:34.435238       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:52:34.454288       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:52:34.454347       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:52:34.459799       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:52:34.460228       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:52:34.460271       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:52:34.461631       1 config.go:200] "Starting service config controller"
	I1206 09:52:34.461660       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:52:34.461692       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:52:34.461699       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:52:34.461714       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:52:34.461719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:52:34.461853       1 config.go:309] "Starting node config controller"
	I1206 09:52:34.461864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:52:34.461871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:52:34.561867       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:52:34.561867       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:52:34.561918       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c75e16c4861971c42e0074cee35fd94cf0f6c825bd671c230595a69f9e339460] <==
	E1206 09:52:25.837430       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:52:25.837472       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1206 09:52:25.837514       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1206 09:52:25.837566       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1206 09:52:26.643806       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:26.644846       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1206 09:52:26.740540       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:26.741784       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1206 09:52:26.744887       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:26.745770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1206 09:52:26.765004       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1206 09:52:26.765852       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1206 09:52:26.830501       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1206 09:52:26.831604       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:52:26.879733       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:52:26.880689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1206 09:52:26.982314       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:52:26.983331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1206 09:52:26.985316       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1206 09:52:26.986385       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1206 09:52:27.033758       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1206 09:52:27.034871       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1206 09:52:27.036910       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:27.037896       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I1206 09:52:29.231370       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:52:29 newest-cni-641599 kubelet[1309]: E1206 09:52:29.154205    1309 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-641599\" already exists" pod="kube-system/kube-controller-manager-newest-cni-641599"
	Dec 06 09:52:29 newest-cni-641599 kubelet[1309]: E1206 09:52:29.154289    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-641599" containerName="kube-controller-manager"
	Dec 06 09:52:29 newest-cni-641599 kubelet[1309]: I1206 09:52:29.171127    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-641599" podStartSLOduration=1.171103822 podStartE2EDuration="1.171103822s" podCreationTimestamp="2025-12-06 09:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:29.159938221 +0000 UTC m=+1.120948030" watchObservedRunningTime="2025-12-06 09:52:29.171103822 +0000 UTC m=+1.132113633"
	Dec 06 09:52:29 newest-cni-641599 kubelet[1309]: I1206 09:52:29.171535    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-641599" podStartSLOduration=1.171521466 podStartE2EDuration="1.171521466s" podCreationTimestamp="2025-12-06 09:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:29.171494244 +0000 UTC m=+1.132504047" watchObservedRunningTime="2025-12-06 09:52:29.171521466 +0000 UTC m=+1.132531275"
	Dec 06 09:52:29 newest-cni-641599 kubelet[1309]: I1206 09:52:29.185095    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-641599" podStartSLOduration=1.185075768 podStartE2EDuration="1.185075768s" podCreationTimestamp="2025-12-06 09:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:29.184666569 +0000 UTC m=+1.145676378" watchObservedRunningTime="2025-12-06 09:52:29.185075768 +0000 UTC m=+1.146085580"
	Dec 06 09:52:29 newest-cni-641599 kubelet[1309]: I1206 09:52:29.718789    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-641599" podStartSLOduration=1.718772931 podStartE2EDuration="1.718772931s" podCreationTimestamp="2025-12-06 09:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:29.197198731 +0000 UTC m=+1.158208541" watchObservedRunningTime="2025-12-06 09:52:29.718772931 +0000 UTC m=+1.679782738"
	Dec 06 09:52:30 newest-cni-641599 kubelet[1309]: E1206 09:52:30.146120    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-641599" containerName="kube-controller-manager"
	Dec 06 09:52:30 newest-cni-641599 kubelet[1309]: E1206 09:52:30.146198    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-641599" containerName="kube-apiserver"
	Dec 06 09:52:30 newest-cni-641599 kubelet[1309]: E1206 09:52:30.146228    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-641599" containerName="etcd"
	Dec 06 09:52:30 newest-cni-641599 kubelet[1309]: E1206 09:52:30.146336    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-641599" containerName="kube-scheduler"
	Dec 06 09:52:31 newest-cni-641599 kubelet[1309]: E1206 09:52:31.148571    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-641599" containerName="kube-controller-manager"
	Dec 06 09:52:31 newest-cni-641599 kubelet[1309]: E1206 09:52:31.148921    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-641599" containerName="kube-scheduler"
	Dec 06 09:52:32 newest-cni-641599 kubelet[1309]: I1206 09:52:32.658640    1309 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 06 09:52:32 newest-cni-641599 kubelet[1309]: I1206 09:52:32.659757    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 06 09:52:33 newest-cni-641599 kubelet[1309]: I1206 09:52:33.856047    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rctg6\" (UniqueName: \"kubernetes.io/projected/b74c4162-c9cd-43a6-9a4a-2162b2899489-kube-api-access-rctg6\") pod \"kube-proxy-fv54r\" (UID: \"b74c4162-c9cd-43a6-9a4a-2162b2899489\") " pod="kube-system/kube-proxy-fv54r"
	Dec 06 09:52:33 newest-cni-641599 kubelet[1309]: I1206 09:52:33.857088    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b74c4162-c9cd-43a6-9a4a-2162b2899489-xtables-lock\") pod \"kube-proxy-fv54r\" (UID: \"b74c4162-c9cd-43a6-9a4a-2162b2899489\") " pod="kube-system/kube-proxy-fv54r"
	Dec 06 09:52:33 newest-cni-641599 kubelet[1309]: I1206 09:52:33.857180    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqb6\" (UniqueName: \"kubernetes.io/projected/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-kube-api-access-xhqb6\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:33 newest-cni-641599 kubelet[1309]: I1206 09:52:33.857285    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74c4162-c9cd-43a6-9a4a-2162b2899489-lib-modules\") pod \"kube-proxy-fv54r\" (UID: \"b74c4162-c9cd-43a6-9a4a-2162b2899489\") " pod="kube-system/kube-proxy-fv54r"
	Dec 06 09:52:33 newest-cni-641599 kubelet[1309]: I1206 09:52:33.857395    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-cni-cfg\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:33 newest-cni-641599 kubelet[1309]: I1206 09:52:33.857446    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b74c4162-c9cd-43a6-9a4a-2162b2899489-kube-proxy\") pod \"kube-proxy-fv54r\" (UID: \"b74c4162-c9cd-43a6-9a4a-2162b2899489\") " pod="kube-system/kube-proxy-fv54r"
	Dec 06 09:52:33 newest-cni-641599 kubelet[1309]: I1206 09:52:33.857485    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-xtables-lock\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:33 newest-cni-641599 kubelet[1309]: I1206 09:52:33.857538    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-lib-modules\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:35 newest-cni-641599 kubelet[1309]: I1206 09:52:35.179072    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-fv54r" podStartSLOduration=2.179054396 podStartE2EDuration="2.179054396s" podCreationTimestamp="2025-12-06 09:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:35.178782704 +0000 UTC m=+7.139792523" watchObservedRunningTime="2025-12-06 09:52:35.179054396 +0000 UTC m=+7.140064205"
	Dec 06 09:52:35 newest-cni-641599 kubelet[1309]: I1206 09:52:35.191226    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-kv2gc" podStartSLOduration=2.191202755 podStartE2EDuration="2.191202755s" podCreationTimestamp="2025-12-06 09:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:35.191170005 +0000 UTC m=+7.152179813" watchObservedRunningTime="2025-12-06 09:52:35.191202755 +0000 UTC m=+7.152212566"
	Dec 06 09:52:35 newest-cni-641599 kubelet[1309]: E1206 09:52:35.438545    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-641599" containerName="etcd"
	

                                                
                                                
-- /stdout --
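The kernel section of the log above is dominated by repeated "IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0" messages — packets arriving on eth0 with a loopback source address, which the kernel flags as martians — and the etcd "rejected connection ... EOF" warnings are typically TCP probes that open a connection and close it before completing a handshake. To confirm whether martian logging and reverse-path filtering are active on the node, a check along these lines should work (the profile name is the one from this run; treat the exact invocation as a sketch, not harness output):

	# Inspect martian-packet logging and reverse-path filtering inside the node
	minikube ssh -p newest-cni-641599 -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter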
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-641599 -n newest-cni-641599
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-641599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-8njm9 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner: exit status 1 (65.18672ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-8njm9" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.28s)
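Note the post-mortem race above: the field-selector query reported coredns-7d764666f9-8njm9 and storage-provisioner as non-running, yet both were already gone (NotFound) by the time the follow-up describe ran. A single-pass sketch that captures namespace and name together and describes each pod immediately narrows that window (context name taken from this run; the loop is illustrative, not part of the harness):

	# List non-running pods with their namespaces, then describe each in one pass
	kubectl --context newest-cni-641599 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  kubectl --context newest-cni-641599 describe po "$name" -n "$ns"
	done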

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (267.928941ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:52:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
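The exit status 11 here comes from minikube's pre-flight "is the cluster paused?" check, not from the addon itself: as the stderr shows, minikube runs `sudo runc list -f json` on the node, and that fails because /run/runc (runc's default state directory) does not exist on this crio node. A quick way to see what runtime state the node actually has, assuming SSH access to the profile from this run (a diagnostic sketch, not a fix):

	# Compare runc's default state dir with what the CRI actually reports
	minikube ssh -p default-k8s-diff-port-759696 -- 'sudo ls /run/runc; sudo crictl ps'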
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-759696 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-759696 describe deploy/metrics-server -n kube-system: exit status 1 (65.53423ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-759696 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
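Because the enable command itself exited 11, the metrics-server Deployment was never created, so there is no image to compare against the expected fake.domain/registry.k8s.io/echoserver:1.4. When the Deployment does exist, the rewritten image reference can be read directly (context name from this run; a sketch):

	# Print the image the metrics-server Deployment is actually running
	kubectl --context default-k8s-diff-port-759696 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'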
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-759696
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-759696:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87",
	        "Created": "2025-12-06T09:51:52.674641004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 772672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:51:52.716355826Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/hosts",
	        "LogPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87-json.log",
	        "Name": "/default-k8s-diff-port-759696",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-759696:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-759696",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87",
	                "LowerDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-759696",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-759696/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-759696",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-759696",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-759696",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fc85ec24905a821de70e8806796e0935a8d62455f773ad783c25d77103673810",
	            "SandboxKey": "/var/run/docker/netns/fc85ec24905a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-759696": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8eafe0b310a8d3a7cc2c2f8b223b86754d5d6f80cb6837e1258939016171b84",
	                    "EndpointID": "e03904475710e7904c71e2c5321321c7a3f2f4c5a2b5a4f9d28218f84fb2d09f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "02:6d:e7:21:17:d2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-759696",
	                        "7e15a5997079"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
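The inspect output above shows the node container healthy: running, privileged, with 22, 2376, 5000, 8444 and 32443/tcp each published on an ephemeral 127.0.0.1 host port. If only the port table is needed, docker's template support trims the output to one line (a sketch):

	# Print just the published-port map for the node container
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-759696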
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-759696 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-759696 logs -n 25: (1.149363316s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-507108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │                     │
	│ stop    │ -p old-k8s-version-507108 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-507108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:50 UTC │
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p cert-expiration-669264 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-669264       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p cert-expiration-669264                                                                                                                                                                                                                            │ cert-expiration-669264       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ image   │ old-k8s-version-507108 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ pause   │ -p old-k8s-version-507108 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                            │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p kubernetes-upgrade-581224                                                                                                                                                                                                                         │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                            │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-920129                                                                                                                                                                                                                      │ disable-driver-mounts-920129 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p no-preload-521770 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p stopped-upgrade-031481                                                                                                                                                                                                                            │ stopped-upgrade-031481       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p no-preload-521770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:52:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
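
Every entry that follows uses the glog layout documented above, so severities can be filtered mechanically. A minimal sketch, assuming the log has been saved to a hypothetical file named minikube.log:

    # W/E/F lines carry warnings, errors, and fatals; the severity letter leads each record
    grep -E '^[[:space:]]*[WEF][0-9]{4} ' minikube.log
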
	I1206 09:52:24.230296  782026 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:52:24.230422  782026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:52:24.230432  782026 out.go:374] Setting ErrFile to fd 2...
	I1206 09:52:24.230439  782026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:52:24.230661  782026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:52:24.231206  782026 out.go:368] Setting JSON to false
	I1206 09:52:24.232660  782026 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9288,"bootTime":1765005456,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:52:24.232745  782026 start.go:143] virtualization: kvm guest
	I1206 09:52:24.234621  782026 out.go:179] * [no-preload-521770] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:52:24.235951  782026 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:52:24.235969  782026 notify.go:221] Checking for updates...
	I1206 09:52:24.238001  782026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:52:24.239277  782026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:24.240424  782026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:52:24.241537  782026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:52:24.243281  782026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:52:24.245035  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:24.245892  782026 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:52:24.276543  782026 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:52:24.276704  782026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:52:24.353259  782026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:52:24.340425815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:52:24.353400  782026 docker.go:319] overlay module found
	I1206 09:52:24.356378  782026 out.go:179] * Using the docker driver based on existing profile
	I1206 09:52:24.357384  782026 start.go:309] selected driver: docker
	I1206 09:52:24.357403  782026 start.go:927] validating driver "docker" against &{Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:24.357556  782026 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:52:24.358245  782026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:52:24.428901  782026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:52:24.419008447 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:52:24.429187  782026 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:24.429223  782026 cni.go:84] Creating CNI manager for ""
	I1206 09:52:24.429316  782026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:24.429384  782026 start.go:353] cluster config:
	{Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:24.431915  782026 out.go:179] * Starting "no-preload-521770" primary control-plane node in "no-preload-521770" cluster
	I1206 09:52:24.432866  782026 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:52:24.433846  782026 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:52:24.434780  782026 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:52:24.434892  782026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:52:24.434902  782026 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/config.json ...
	I1206 09:52:24.435051  782026 cache.go:107] acquiring lock: {Name:mk3f028e80f8ac87cdcd24320d70e36a894791c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435140  782026 cache.go:107] acquiring lock: {Name:mkdc523156a072e4947d577065578e91a9732b77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435195  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1206 09:52:24.435150  782026 cache.go:107] acquiring lock: {Name:mke4ba1139ae959d606dd38112efde7d4d448b97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435205  782026 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 72.054µs
	I1206 09:52:24.435222  782026 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 09:52:24.435195  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 09:52:24.435241  782026 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 196.366µs
	I1206 09:52:24.435210  782026 cache.go:107] acquiring lock: {Name:mkd3b5a28f8041fde0d80c5102632df37b913591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435260  782026 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435276  782026 cache.go:107] acquiring lock: {Name:mk715c193fee45ce0be781bde9149a4d7c68db76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435277  782026 cache.go:107] acquiring lock: {Name:mkacf44d4c7d284d9b31511b6f07c1d37c06e59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435307  782026 cache.go:107] acquiring lock: {Name:mk06fdc2189bb8fbdd9f705d1a497d61567fd9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435321  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 09:52:24.435319  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 09:52:24.435328  782026 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 55.543µs
	I1206 09:52:24.435337  782026 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 09:52:24.435334  782026 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 245.346µs
	I1206 09:52:24.435326  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 09:52:24.435346  782026 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 09:52:24.435347  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 09:52:24.435347  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 09:52:24.435357  782026 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 151.015µs
	I1206 09:52:24.435350  782026 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 76.006µs
	I1206 09:52:24.435367  782026 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435369  782026 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435049  782026 cache.go:107] acquiring lock: {Name:mke865bc2a308b5226070dc1deef9b7218b9996f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435428  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 09:52:24.435435  782026 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 401.873µs
	I1206 09:52:24.435442  782026 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 09:52:24.435445  782026 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 56.266µs
	I1206 09:52:24.435479  782026 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435539  782026 cache.go:87] Successfully saved all images to host disk.
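
All of the required images were already present, which is why each "save to tar file" step above completed in microseconds. The cache is plain files on the host and can be listed directly (path taken from the log; the MINIKUBE_HOME prefix differs per machine):

    ls /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/
    # expected entries include etcd_3.6.5-0, kube-apiserver_v1.35.0-beta.0, kube-proxy_v1.35.0-beta.0, pause_3.10.1
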
	I1206 09:52:24.458045  782026 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:52:24.458074  782026 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:52:24.458091  782026 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:52:24.458128  782026 start.go:360] acquireMachinesLock for no-preload-521770: {Name:mkf85c9fe05269c67d1e37d10022df9548bf23d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.458195  782026 start.go:364] duration metric: took 47.288µs to acquireMachinesLock for "no-preload-521770"
	I1206 09:52:24.458214  782026 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:52:24.458221  782026 fix.go:54] fixHost starting: 
	I1206 09:52:24.458538  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:24.478477  782026 fix.go:112] recreateIfNeeded on no-preload-521770: state=Stopped err=<nil>
	W1206 09:52:24.478528  782026 fix.go:138] unexpected machine state, will restart: <nil>
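
The state=Stopped result above is what selects the restart path rather than a fresh create; the same inspection can be reproduced by hand with the command shown in the log:

    docker container inspect no-preload-521770 --format={{.State.Status}}
    # prints "exited" for a stopped container, which minikube reports as state=Stopped
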
	W1206 09:52:22.733773  771291 node_ready.go:57] node "default-k8s-diff-port-759696" has "Ready":"False" status (will retry)
	I1206 09:52:23.232972  771291 node_ready.go:49] node "default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.233002  771291 node_ready.go:38] duration metric: took 11.002657942s for node "default-k8s-diff-port-759696" to be "Ready" ...
	I1206 09:52:23.233017  771291 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:23.233074  771291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:23.246060  771291 api_server.go:72] duration metric: took 11.380999717s to wait for apiserver process to appear ...
	I1206 09:52:23.246087  771291 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:23.246110  771291 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1206 09:52:23.250298  771291 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1206 09:52:23.251303  771291 api_server.go:141] control plane version: v1.34.2
	I1206 09:52:23.251332  771291 api_server.go:131] duration metric: took 5.237123ms to wait for apiserver health ...
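
The healthz probe is an ordinary HTTPS GET against the apiserver; it can be reproduced with curl (-k skips certificate verification, tolerable only against a throwaway test cluster like this one):

    curl -k https://192.168.103.2:8444/healthz
    # ok
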
	I1206 09:52:23.251343  771291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:23.255051  771291 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:23.255095  771291 system_pods.go:61] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:23.255118  771291 system_pods.go:61] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.255131  771291 system_pods.go:61] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.255144  771291 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.255151  771291 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.255160  771291 system_pods.go:61] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.255167  771291 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.255177  771291 system_pods.go:61] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:23.255191  771291 system_pods.go:74] duration metric: took 3.838741ms to wait for pod list to return data ...
	I1206 09:52:23.255204  771291 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:23.257429  771291 default_sa.go:45] found service account: "default"
	I1206 09:52:23.257448  771291 default_sa.go:55] duration metric: took 2.236469ms for default service account to be created ...
	I1206 09:52:23.257484  771291 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:52:23.260094  771291 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:23.260118  771291 system_pods.go:89] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:23.260123  771291 system_pods.go:89] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.260172  771291 system_pods.go:89] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.260176  771291 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.260180  771291 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.260187  771291 system_pods.go:89] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.260190  771291 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.260198  771291 system_pods.go:89] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:23.260226  771291 retry.go:31] will retry after 301.255841ms: missing components: kube-dns
	I1206 09:52:23.564931  771291 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:23.564969  771291 system_pods.go:89] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Running
	I1206 09:52:23.564978  771291 system_pods.go:89] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.564985  771291 system_pods.go:89] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.564990  771291 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.564997  771291 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.565002  771291 system_pods.go:89] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.565007  771291 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.565012  771291 system_pods.go:89] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Running
	I1206 09:52:23.565023  771291 system_pods.go:126] duration metric: took 307.529453ms to wait for k8s-apps to be running ...
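
minikube implements this wait itself with short randomized retries (301ms above); roughly the same check can be expressed with stock kubectl, assuming the kubeconfig context name matches the profile:

    kubectl --context default-k8s-diff-port-759696 -n kube-system wait pod --all --for=condition=Ready --timeout=2m
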
	I1206 09:52:23.565037  771291 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:52:23.565093  771291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:52:23.578827  771291 system_svc.go:56] duration metric: took 13.778342ms WaitForService to wait for kubelet
	I1206 09:52:23.578859  771291 kubeadm.go:587] duration metric: took 11.713805961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:23.578882  771291 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:23.581992  771291 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:23.582044  771291 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:23.582067  771291 node_conditions.go:105] duration metric: took 3.178425ms to run NodePressure ...
	I1206 09:52:23.582093  771291 start.go:242] waiting for startup goroutines ...
	I1206 09:52:23.582106  771291 start.go:247] waiting for cluster config update ...
	I1206 09:52:23.582126  771291 start.go:256] writing updated cluster config ...
	I1206 09:52:23.582452  771291 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:23.588357  771291 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:23.664775  771291 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.669577  771291 pod_ready.go:94] pod "coredns-66bc5c9577-gpnjq" is "Ready"
	I1206 09:52:23.669603  771291 pod_ready.go:86] duration metric: took 4.791126ms for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.671822  771291 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.675792  771291 pod_ready.go:94] pod "etcd-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.675811  771291 pod_ready.go:86] duration metric: took 3.966323ms for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.677683  771291 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.681360  771291 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.681380  771291 pod_ready.go:86] duration metric: took 3.676297ms for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.683330  771291 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.993645  771291 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.993670  771291 pod_ready.go:86] duration metric: took 310.321581ms for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.194283  771291 pod_ready.go:83] waiting for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.594004  771291 pod_ready.go:94] pod "kube-proxy-jstq5" is "Ready"
	I1206 09:52:24.594047  771291 pod_ready.go:86] duration metric: took 399.738837ms for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.795328  771291 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:25.193288  771291 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:25.193321  771291 pod_ready.go:86] duration metric: took 397.96695ms for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:25.193336  771291 pod_ready.go:40] duration metric: took 1.604949342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:25.245818  771291 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:52:25.247685  771291 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-759696" cluster and "default" namespace by default
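
Per the final message, the kubeconfig context already points at the new profile, so the cluster is immediately usable (output illustrative):

    kubectl get nodes
    # NAME                           STATUS   ROLES           AGE   VERSION
    # default-k8s-diff-port-759696   Ready    control-plane   ...   v1.34.2
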
	W1206 09:52:22.907887  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	W1206 09:52:24.909512  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	I1206 09:52:28.881377  778743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:52:28.881427  778743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:52:28.881601  778743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:52:28.881695  778743 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:52:28.881749  778743 kubeadm.go:319] OS: Linux
	I1206 09:52:28.881841  778743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:52:28.881928  778743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:52:28.882000  778743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:52:28.882049  778743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:52:28.882132  778743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:52:28.882210  778743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:52:28.882277  778743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:52:28.882345  778743 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:52:28.882436  778743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:52:28.882576  778743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:52:28.882704  778743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:52:28.882775  778743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:52:28.884431  778743 out.go:252]   - Generating certificates and keys ...
	I1206 09:52:28.884529  778743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:52:28.884636  778743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:52:28.884748  778743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:52:28.884841  778743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:52:28.884943  778743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:52:28.885024  778743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:52:28.885100  778743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:52:28.885280  778743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-641599] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:52:28.885358  778743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:52:28.885556  778743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-641599] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:52:28.885624  778743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:52:28.885735  778743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:52:28.885800  778743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:52:28.885872  778743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:52:28.885933  778743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:52:28.885985  778743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:52:28.886031  778743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:52:28.886092  778743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:52:28.886144  778743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:52:28.886235  778743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:52:28.886303  778743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:52:28.887431  778743 out.go:252]   - Booting up control plane ...
	I1206 09:52:28.887524  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:52:28.887617  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:52:28.887698  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:52:28.887847  778743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:52:28.887990  778743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:52:28.888099  778743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:52:28.888175  778743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:52:28.888229  778743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:52:28.888348  778743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:52:28.888468  778743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:52:28.888545  778743 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.352933ms
	I1206 09:52:28.888691  778743 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:52:28.888800  778743 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:52:28.888930  778743 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:52:28.889053  778743 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:52:28.889201  778743 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005929457s
	I1206 09:52:28.889315  778743 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.86815486s
	I1206 09:52:28.889418  778743 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502032967s
	I1206 09:52:28.889585  778743 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:52:28.889700  778743 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:52:28.889754  778743 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:52:28.889915  778743 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-641599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:52:28.889974  778743 kubeadm.go:319] [bootstrap-token] Using token: w8ash3.bz5dwngp2dkzla91
	I1206 09:52:28.891194  778743 out.go:252]   - Configuring RBAC rules ...
	I1206 09:52:28.891287  778743 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:52:28.891362  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:52:28.891577  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:52:28.891727  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:52:28.891834  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:52:28.891911  778743 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:52:28.892085  778743 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:52:28.892135  778743 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:52:28.892202  778743 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:52:28.892211  778743 kubeadm.go:319] 
	I1206 09:52:28.892294  778743 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:52:28.892309  778743 kubeadm.go:319] 
	I1206 09:52:28.892420  778743 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:52:28.892429  778743 kubeadm.go:319] 
	I1206 09:52:28.892488  778743 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:52:28.892587  778743 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:52:28.892635  778743 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:52:28.892639  778743 kubeadm.go:319] 
	I1206 09:52:28.892698  778743 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:52:28.892710  778743 kubeadm.go:319] 
	I1206 09:52:28.892782  778743 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:52:28.892791  778743 kubeadm.go:319] 
	I1206 09:52:28.892852  778743 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:52:28.892929  778743 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:52:28.892989  778743 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:52:28.892995  778743 kubeadm.go:319] 
	I1206 09:52:28.893066  778743 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:52:28.893153  778743 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:52:28.893165  778743 kubeadm.go:319] 
	I1206 09:52:28.893263  778743 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w8ash3.bz5dwngp2dkzla91 \
	I1206 09:52:28.893386  778743 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:52:28.893421  778743 kubeadm.go:319] 	--control-plane 
	I1206 09:52:28.893430  778743 kubeadm.go:319] 
	I1206 09:52:28.893539  778743 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:52:28.893551  778743 kubeadm.go:319] 
	I1206 09:52:28.893668  778743 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w8ash3.bz5dwngp2dkzla91 \
	I1206 09:52:28.893821  778743 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
	I1206 09:52:28.893837  778743 cni.go:84] Creating CNI manager for ""
	I1206 09:52:28.893846  778743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:28.895223  778743 out.go:179] * Configuring CNI (Container Networking Interface) ...
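
kindnet ships as a DaemonSet in kube-system; once the CNI manifest is applied, its pod can be checked directly (label selector assumed from minikube's kindnet manifest):

    kubectl --context newest-cni-641599 -n kube-system get pods -l app=kindnet
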
	I1206 09:52:24.480389  782026 out.go:252] * Restarting existing docker container for "no-preload-521770" ...
	I1206 09:52:24.480470  782026 cli_runner.go:164] Run: docker start no-preload-521770
	I1206 09:52:24.745004  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:24.764399  782026 kic.go:430] container "no-preload-521770" state is running.
	I1206 09:52:24.764844  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:24.783728  782026 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/config.json ...
	I1206 09:52:24.784054  782026 machine.go:94] provisionDockerMachine start ...
	I1206 09:52:24.784143  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:24.805518  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:24.805838  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:24.805858  782026 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:52:24.806622  782026 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32898->127.0.0.1:33211: read: connection reset by peer
	I1206 09:52:27.951364  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-521770
	
	I1206 09:52:27.951393  782026 ubuntu.go:182] provisioning hostname "no-preload-521770"
	I1206 09:52:27.951451  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:27.971393  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:27.971652  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:27.971668  782026 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-521770 && echo "no-preload-521770" | sudo tee /etc/hostname
	I1206 09:52:28.116596  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-521770
	
	I1206 09:52:28.116743  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.145696  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:28.146039  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:28.146078  782026 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521770/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:52:28.277194  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
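
The script sent above is idempotent: it rewrites the 127.0.1.1 entry only if the hostname is not already mapped. Because the node is itself a Docker container, the result is easy to verify from the host:

    docker exec no-preload-521770 grep no-preload-521770 /etc/hosts
    # 127.0.1.1 no-preload-521770
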
	I1206 09:52:28.277232  782026 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:52:28.277279  782026 ubuntu.go:190] setting up certificates
	I1206 09:52:28.277296  782026 provision.go:84] configureAuth start
	I1206 09:52:28.277367  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:28.297825  782026 provision.go:143] copyHostCerts
	I1206 09:52:28.297892  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:52:28.297904  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:52:28.297965  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:52:28.298076  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:52:28.298087  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:52:28.298116  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:52:28.298173  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:52:28.298181  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:52:28.298204  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:52:28.298263  782026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.no-preload-521770 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-521770]
	I1206 09:52:28.338699  782026 provision.go:177] copyRemoteCerts
	I1206 09:52:28.338753  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:52:28.338786  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.359141  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:28.454191  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:52:28.473624  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:52:28.491855  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:52:28.510074  782026 provision.go:87] duration metric: took 232.761897ms to configureAuth
	I1206 09:52:28.510100  782026 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:52:28.510273  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:28.510386  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.530157  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:28.530466  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:28.530510  782026 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:52:28.858502  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:52:28.858532  782026 machine.go:97] duration metric: took 4.074459793s to provisionDockerMachine
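
The sysconfig drop-in written during provisioning is a plain environment file; that it landed as intended can be confirmed from the host:

    docker exec no-preload-521770 cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
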
	I1206 09:52:28.858548  782026 start.go:293] postStartSetup for "no-preload-521770" (driver="docker")
	I1206 09:52:28.858563  782026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:52:28.858636  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:52:28.858705  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.878915  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:28.979184  782026 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:52:28.983787  782026 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:52:28.983819  782026 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:52:28.983832  782026 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:52:28.983889  782026 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:52:28.983970  782026 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:52:28.984063  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:52:28.992522  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:52:29.014583  782026 start.go:296] duration metric: took 156.016922ms for postStartSetup
	I1206 09:52:29.014683  782026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:52:29.014736  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.034344  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.129648  782026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:52:29.135314  782026 fix.go:56] duration metric: took 4.677087094s for fixHost
	I1206 09:52:29.135342  782026 start.go:83] releasing machines lock for "no-preload-521770", held for 4.677136228s
	I1206 09:52:29.135410  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:29.162339  782026 ssh_runner.go:195] Run: cat /version.json
	I1206 09:52:29.162396  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.162642  782026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:52:29.162728  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.185520  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.186727  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.349551  782026 ssh_runner.go:195] Run: systemctl --version
	I1206 09:52:29.358158  782026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:52:29.394015  782026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:52:29.398851  782026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:52:29.398921  782026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:52:29.407814  782026 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:52:29.407837  782026 start.go:496] detecting cgroup driver to use...
	I1206 09:52:29.407872  782026 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:52:29.407930  782026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:52:29.423937  782026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:52:29.438135  782026 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:52:29.438206  782026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:52:29.455656  782026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:52:29.469279  782026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:52:29.552150  782026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:52:29.653579  782026 docker.go:234] disabling docker service ...
	I1206 09:52:29.653654  782026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:52:29.673000  782026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:52:29.690524  782026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:52:29.786324  782026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:52:29.872262  782026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:52:29.885199  782026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:52:29.900924  782026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:52:29.900982  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.910821  782026 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:52:29.910889  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.919823  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.929149  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.938262  782026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:52:29.946657  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.955647  782026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.964498  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.973324  782026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:52:29.980560  782026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
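The run above configures CRI-O entirely through sed edits to /etc/crio/crio.conf.d/02-crio.conf: pause image, systemd cgroup manager, conmon cgroup, and the default_sysctls list. A minimal Go sketch of the same in-place key rewrite, assuming a flat `key = value` TOML drop-in (an illustrative helper only, not minikube's actual implementation):

    // crioconf.go - illustrative sketch; minikube performs these edits via
    // the sed commands logged above, not via this hypothetical helper.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioOption rewrites any `key = ...` line in a CRI-O drop-in,
    // mirroring: sed -i 's|^.*key = .*$|key = "value"|' <path>
    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
    		panic(err)
    	}
    	if err := setCrioOption(conf, "cgroup_manager", "systemd"); err != nil {
    		panic(err)
    	}
    }

Like the logged sed expressions, the regex replaces the whole line containing the key, so commented-out defaults get overwritten too.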
	I1206 09:52:29.987564  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:30.070090  782026 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:52:30.217146  782026 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:52:30.217230  782026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:52:30.222028  782026 start.go:564] Will wait 60s for crictl version
	I1206 09:52:30.222111  782026 ssh_runner.go:195] Run: which crictl
	I1206 09:52:30.226345  782026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:52:30.254418  782026 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:52:30.254555  782026 ssh_runner.go:195] Run: crio --version
	I1206 09:52:30.293642  782026 ssh_runner.go:195] Run: crio --version
	I1206 09:52:30.325962  782026 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
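After restarting crio, the log waits up to 60s for the socket path and then for crictl to answer. A sketch of that wait, assuming a plain stat-based probe (the real check may differ):

    // waitsock.go - sketch of the "Will wait 60s for socket path" pattern.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket file exists; the runtime is likely up
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket ready")
    }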
	W1206 09:52:27.407536  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	I1206 09:52:29.407975  771042 node_ready.go:49] node "embed-certs-997968" is "Ready"
	I1206 09:52:29.408008  771042 node_ready.go:38] duration metric: took 11.003669422s for node "embed-certs-997968" to be "Ready" ...
	I1206 09:52:29.408027  771042 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:29.408075  771042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:29.420440  771042 api_server.go:72] duration metric: took 11.320537531s to wait for apiserver process to appear ...
	I1206 09:52:29.420477  771042 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:29.420500  771042 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:52:29.425249  771042 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 09:52:29.426270  771042 api_server.go:141] control plane version: v1.34.2
	I1206 09:52:29.426306  771042 api_server.go:131] duration metric: took 5.819336ms to wait for apiserver health ...
	I1206 09:52:29.426317  771042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:29.429954  771042 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:29.429999  771042 system_pods.go:61] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.430021  771042 system_pods.go:61] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.430033  771042 system_pods.go:61] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.430039  771042 system_pods.go:61] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.430044  771042 system_pods.go:61] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.430050  771042 system_pods.go:61] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.430054  771042 system_pods.go:61] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.430065  771042 system_pods.go:61] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.430084  771042 system_pods.go:74] duration metric: took 3.759477ms to wait for pod list to return data ...
	I1206 09:52:29.430098  771042 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:29.432724  771042 default_sa.go:45] found service account: "default"
	I1206 09:52:29.432745  771042 default_sa.go:55] duration metric: took 2.638778ms for default service account to be created ...
	I1206 09:52:29.432752  771042 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:52:29.435843  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:29.435879  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.435898  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.435906  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.435918  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.435928  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.435934  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.435940  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.435950  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.435977  771042 retry.go:31] will retry after 248.263392ms: missing components: kube-dns
	I1206 09:52:29.688876  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:29.688919  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.688928  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.688936  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.688941  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.688948  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.688953  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.688958  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.688965  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.689163  771042 retry.go:31] will retry after 320.128103ms: missing components: kube-dns
	I1206 09:52:30.018114  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:30.018163  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:30.018172  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:30.018184  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:30.018189  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:30.018203  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:30.018209  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:30.018214  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:30.018220  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:30.018241  771042 retry.go:31] will retry after 435.909841ms: missing components: kube-dns
	I1206 09:52:30.459320  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:30.459353  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:30.459361  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:30.459367  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:30.459371  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:30.459375  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:30.459378  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:30.459382  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:30.459390  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:30.459410  771042 retry.go:31] will retry after 560.042985ms: missing components: kube-dns
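The repeated "will retry after ...ms: missing components: kube-dns" lines above show system_pods polling with a growing, jittered delay. A sketch of that retry shape; checkPods is a hypothetical stand-in for the real kube-system pod query:

    // podretry.go - sketch of the retry.go pattern: poll, then retry with a
    // jittered, growing delay until no components are missing.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func waitForComponents(checkPods func() []string, maxWait time.Duration) error {
    	deadline := time.Now().Add(maxWait)
    	delay := 200 * time.Millisecond
    	for {
    		missing := checkPods()
    		if len(missing) == 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("missing components: %v", missing)
    		}
    		// grow the delay and add jitter, like the ~248ms, ~320ms,
    		// ~436ms, ~560ms intervals in the log above
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
    		delay = delay * 3 / 2
    	}
    }

    func main() {
    	calls := 0
    	err := waitForComponents(func() []string {
    		calls++
    		if calls < 4 {
    			return []string{"kube-dns"} // simulated pending CoreDNS
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("done:", err)
    }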
	I1206 09:52:30.327261  782026 cli_runner.go:164] Run: docker network inspect no-preload-521770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:52:30.345176  782026 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:52:30.349559  782026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
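The hosts update above is an idempotent upsert: grep -v strips any existing host.minikube.internal line, the fresh mapping is appended, and the temp file is copied over /etc/hosts. The same pattern in Go (a sketch, not minikube's actual code):

    // hostsentry.go - sketch of the idempotent /etc/hosts upsert performed
    // by the bash one-liner above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func upsertHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// drop any existing mapping, like grep -v $'\t<name>$'
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path) // stands in for the cp-over-/etc/hosts step
    }

    func main() {
    	if err := upsertHostsEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }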
	I1206 09:52:30.361149  782026 kubeadm.go:884] updating cluster {Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:52:30.361304  782026 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:52:30.361352  782026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:52:30.394137  782026 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:52:30.394157  782026 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:52:30.394165  782026 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:52:30.394264  782026 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-521770 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
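The kubelet unit dump above works by clearing ExecStart in a systemd drop-in and restating it with this run's flags. A sketch of generating that drop-in with the paths and values from the log (the later "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)" line is where the real content lands):

    // kubelet_dropin.go - sketch of rendering the drop-in logged above;
    // flag values are copied from this run, not derived independently.
    package main

    import (
    	"fmt"
    	"os"
    )

    const dropinTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

    [Install]
    `

    func main() {
    	unit := fmt.Sprintf(dropinTmpl,
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
    		"no-preload-521770",
    		"192.168.94.2")
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }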
	I1206 09:52:30.394337  782026 ssh_runner.go:195] Run: crio config
	I1206 09:52:30.445676  782026 cni.go:84] Creating CNI manager for ""
	I1206 09:52:30.445701  782026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:30.445721  782026 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:52:30.445751  782026 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521770 NodeName:no-preload-521770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:52:30.445918  782026 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521770"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:52:30.446005  782026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:52:30.455026  782026 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:52:30.455103  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:52:30.465415  782026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 09:52:30.479189  782026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:52:30.492386  782026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1206 09:52:30.505928  782026 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:52:30.510294  782026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:52:30.520891  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:30.616935  782026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:30.643233  782026 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770 for IP: 192.168.94.2
	I1206 09:52:30.643253  782026 certs.go:195] generating shared ca certs ...
	I1206 09:52:30.643270  782026 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:30.643417  782026 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:52:30.643475  782026 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:52:30.643487  782026 certs.go:257] generating profile certs ...
	I1206 09:52:30.643572  782026 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/client.key
	I1206 09:52:30.643626  782026 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.key.1f412e4b
	I1206 09:52:30.643661  782026 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.key
	I1206 09:52:30.643767  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:52:30.643797  782026 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:52:30.643807  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:52:30.643835  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:52:30.643858  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:52:30.643882  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:52:30.643923  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:52:30.644530  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:52:30.663921  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:52:30.683993  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:52:30.703729  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:52:30.728228  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:52:30.750676  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:52:30.770880  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:52:30.791391  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:52:30.810670  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:52:30.829739  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:52:30.848932  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:52:30.867931  782026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:52:30.880912  782026 ssh_runner.go:195] Run: openssl version
	I1206 09:52:30.887389  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.895646  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:52:30.903571  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.907605  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.907658  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.943625  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:52:30.951924  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.959877  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:52:30.967663  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.971326  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.971370  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:52:31.007142  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:52:31.015641  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.024412  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:52:31.032674  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.036926  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.036985  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.082560  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
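The ln -fs plus "openssl x509 -hash -noout" sequence above exists because OpenSSL looks up CAs in /etc/ssl/certs by subject-hash symlinks such as b5213941.0. A sketch of creating one such link, shelling out to openssl for the hash (illustrative only):

    // certhash.go - sketch of the subject-hash symlink pattern logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func linkCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // mirror ln -fs: replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }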
	I1206 09:52:31.090907  782026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:52:31.095147  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:52:31.132949  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:52:31.180275  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:52:31.227799  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:52:31.286217  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:52:31.341886  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
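The -checkend 86400 probes above ask openssl whether each control-plane certificate expires within 24 hours. The same check done natively in Go (a sketch; the path is one of the certs from the log):

    // certexpiry.go - native equivalent of `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// true when "now + d" is past NotAfter, i.e. the cert is too close
    	// to expiry, matching -checkend's failure condition
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }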
	I1206 09:52:31.383881  782026 kubeadm.go:401] StartCluster: {Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:31.383997  782026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:52:31.384066  782026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:52:31.416718  782026 cri.go:89] found id: "9dc873b13be2daef40a2751e9c41eeada071f9d2a36935447fdcf8f69e38bcb0"
	I1206 09:52:31.416742  782026 cri.go:89] found id: "4740c81bbda6eb396add856fa79e529e77045345b6b8aafa409f0c035427e3e5"
	I1206 09:52:31.416748  782026 cri.go:89] found id: "1180b54a98400f332dbb4dda677c01fc02e3c44f901938b0567810c83d6df692"
	I1206 09:52:31.416753  782026 cri.go:89] found id: "585f10915444acd7acfdddbe9415b18fc4bb7c9d1e5009ad15a8bf10a9129068"
	I1206 09:52:31.416758  782026 cri.go:89] found id: ""
	I1206 09:52:31.416811  782026 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:52:31.431324  782026 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:52:31Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:52:31.431427  782026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:52:31.441532  782026 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:52:31.441554  782026 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:52:31.441600  782026 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:52:31.451039  782026 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:52:31.451930  782026 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-521770" does not appear in /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:31.452547  782026 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-499330/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-521770" cluster setting kubeconfig missing "no-preload-521770" context setting]
	I1206 09:52:31.453822  782026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:31.456004  782026 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:52:31.465352  782026 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1206 09:52:31.465394  782026 kubeadm.go:602] duration metric: took 23.833546ms to restartPrimaryControlPlane
	I1206 09:52:31.465406  782026 kubeadm.go:403] duration metric: took 81.54039ms to StartCluster
	I1206 09:52:31.465427  782026 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:31.465520  782026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:31.468310  782026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:31.468628  782026 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:52:31.468678  782026 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:52:31.468786  782026 addons.go:70] Setting storage-provisioner=true in profile "no-preload-521770"
	I1206 09:52:31.468806  782026 addons.go:239] Setting addon storage-provisioner=true in "no-preload-521770"
	W1206 09:52:31.468818  782026 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:52:31.468847  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.468862  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:31.468867  782026 addons.go:70] Setting dashboard=true in profile "no-preload-521770"
	I1206 09:52:31.468883  782026 addons.go:70] Setting default-storageclass=true in profile "no-preload-521770"
	I1206 09:52:31.468914  782026 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521770"
	I1206 09:52:31.468892  782026 addons.go:239] Setting addon dashboard=true in "no-preload-521770"
	W1206 09:52:31.469002  782026 addons.go:248] addon dashboard should already be in state true
	I1206 09:52:31.469030  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.469241  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.469323  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.469518  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.471225  782026 out.go:179] * Verifying Kubernetes components...
	I1206 09:52:31.474638  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:31.493357  782026 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 09:52:31.493374  782026 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:31.494645  782026 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:31.494664  782026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:52:31.494695  782026 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:52:28.896370  778743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:52:28.900738  778743 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1206 09:52:28.900757  778743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:52:28.915819  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:52:29.131536  778743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:52:29.131597  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:29.131646  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-641599 minikube.k8s.io/updated_at=2025_12_06T09_52_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=newest-cni-641599 minikube.k8s.io/primary=true
	I1206 09:52:29.144444  778743 ops.go:34] apiserver oom_adj: -16
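The oom_adj check above reads /proc/<pid>/oom_adj for the kube-apiserver; the logged -16 means the kernel is strongly biased against OOM-killing it. A sketch of the probe (assumes pgrep is available on the host):

    // oomadj.go - sketch of `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "pgrep:", err)
    		os.Exit(1)
    	}
    	pid := strings.Fields(string(out))[0] // newest/first matching pid
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("kube-apiserver (pid %s) oom_adj: %s", pid, adj)
    }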
	I1206 09:52:29.227957  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:29.728684  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:30.228791  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:30.728381  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:31.228935  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:31.728685  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:31.024277  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:31.024322  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Running
	I1206 09:52:31.024332  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:31.024337  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:31.024343  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:31.024349  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:31.024355  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:31.024360  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:31.024370  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Running
	I1206 09:52:31.024380  771042 system_pods.go:126] duration metric: took 1.591621605s to wait for k8s-apps to be running ...
	I1206 09:52:31.024393  771042 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:52:31.024440  771042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:52:31.038119  771042 system_svc.go:56] duration metric: took 13.715424ms WaitForService to wait for kubelet
	I1206 09:52:31.038150  771042 kubeadm.go:587] duration metric: took 12.938253131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:31.038185  771042 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:31.041178  771042 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:31.041209  771042 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:31.041227  771042 node_conditions.go:105] duration metric: took 3.034732ms to run NodePressure ...
	I1206 09:52:31.041253  771042 start.go:242] waiting for startup goroutines ...
	I1206 09:52:31.041267  771042 start.go:247] waiting for cluster config update ...
	I1206 09:52:31.041282  771042 start.go:256] writing updated cluster config ...
	I1206 09:52:31.041621  771042 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:31.045304  771042 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:31.049252  771042 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.053658  771042 pod_ready.go:94] pod "coredns-66bc5c9577-kw8nl" is "Ready"
	I1206 09:52:31.053679  771042 pod_ready.go:86] duration metric: took 4.401998ms for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.055651  771042 pod_ready.go:83] waiting for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.059893  771042 pod_ready.go:94] pod "etcd-embed-certs-997968" is "Ready"
	I1206 09:52:31.059916  771042 pod_ready.go:86] duration metric: took 4.242092ms for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.061937  771042 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.065821  771042 pod_ready.go:94] pod "kube-apiserver-embed-certs-997968" is "Ready"
	I1206 09:52:31.065838  771042 pod_ready.go:86] duration metric: took 3.881454ms for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.067804  771042 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.450772  771042 pod_ready.go:94] pod "kube-controller-manager-embed-certs-997968" is "Ready"
	I1206 09:52:31.450805  771042 pod_ready.go:86] duration metric: took 382.979811ms for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.650858  771042 pod_ready.go:83] waiting for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.050643  771042 pod_ready.go:94] pod "kube-proxy-m2zpr" is "Ready"
	I1206 09:52:32.050679  771042 pod_ready.go:86] duration metric: took 399.791241ms for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.251448  771042 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.651295  771042 pod_ready.go:94] pod "kube-scheduler-embed-certs-997968" is "Ready"
	I1206 09:52:32.651333  771042 pod_ready.go:86] duration metric: took 399.807696ms for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.651350  771042 pod_ready.go:40] duration metric: took 1.606005846s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:32.715347  771042 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:52:32.717209  771042 out.go:179] * Done! kubectl is now configured to use "embed-certs-997968" cluster and "default" namespace by default
	I1206 09:52:31.494726  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.496408  782026 addons.go:239] Setting addon default-storageclass=true in "no-preload-521770"
	W1206 09:52:31.496441  782026 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:52:31.496486  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.496950  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.497158  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:52:31.497175  782026 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:52:31.497220  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.525895  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.525995  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.531373  782026 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:31.531497  782026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:52:31.531646  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.558891  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.617244  782026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:31.631670  782026 node_ready.go:35] waiting up to 6m0s for node "no-preload-521770" to be "Ready" ...
	I1206 09:52:31.637913  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:31.640598  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:52:31.640621  782026 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:52:31.655712  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:52:31.655739  782026 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:52:31.665130  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:31.670828  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:52:31.670852  782026 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:52:31.684031  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:52:31.684058  782026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:52:31.699299  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:52:31.699328  782026 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:52:31.715252  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:52:31.715293  782026 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:52:31.731123  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:52:31.731152  782026 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:52:31.746870  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:52:31.746901  782026 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:52:31.764204  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:52:31.764245  782026 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:52:31.778466  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
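The dashboard addon install above writes each manifest over SSH ("scp memory --> ...") and then applies them all in one kubectl invocation with repeated -f flags and KUBECONFIG pointed at the in-cluster admin kubeconfig. A sketch of the apply step only, with paths taken from the logged command (the file list is truncated here; illustrative, not minikube's code):

    // applyaddons.go - sketch of the batched `kubectl apply -f ... -f ...`.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyManifests(kubectl, kubeconfig string, files []string) error {
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	files := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-dp.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    		// ...remaining dashboard manifests as in the log
    	}
    	if err := applyManifests("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"/var/lib/minikube/kubeconfig", files); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }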
	I1206 09:52:32.765763  782026 node_ready.go:49] node "no-preload-521770" is "Ready"
	I1206 09:52:32.765807  782026 node_ready.go:38] duration metric: took 1.13410262s for node "no-preload-521770" to be "Ready" ...
	I1206 09:52:32.765825  782026 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:32.765878  782026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:33.377502  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.739521443s)
	I1206 09:52:33.377555  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.712396791s)
	I1206 09:52:33.377698  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.599185729s)
	I1206 09:52:33.377732  782026 api_server.go:72] duration metric: took 1.909066219s to wait for apiserver process to appear ...
	I1206 09:52:33.377745  782026 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:33.377766  782026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:52:33.379186  782026 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-521770 addons enable metrics-server
	
	I1206 09:52:33.382540  782026 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:52:33.382566  782026 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:52:33.384754  782026 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1206 09:52:32.228887  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:32.728657  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.228203  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.728879  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.831071  778743 kubeadm.go:1114] duration metric: took 4.699527608s to wait for elevateKubeSystemPrivileges
	I1206 09:52:33.831115  778743 kubeadm.go:403] duration metric: took 12.354833253s to StartCluster
	I1206 09:52:33.831139  778743 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:33.831222  778743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:33.834301  778743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:33.835103  778743 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:52:33.835251  778743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:52:33.835593  778743 config.go:182] Loaded profile config "newest-cni-641599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:33.836022  778743 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:52:33.836151  778743 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-641599"
	I1206 09:52:33.836171  778743 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-641599"
	I1206 09:52:33.836215  778743 host.go:66] Checking if "newest-cni-641599" exists ...
	I1206 09:52:33.836254  778743 addons.go:70] Setting default-storageclass=true in profile "newest-cni-641599"
	I1206 09:52:33.836283  778743 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-641599"
	I1206 09:52:33.836675  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.836819  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.836996  778743 out.go:179] * Verifying Kubernetes components...
	I1206 09:52:33.838578  778743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:33.865836  778743 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:33.385762  782026 addons.go:530] duration metric: took 1.917092445s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1206 09:52:33.877868  782026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:52:33.892187  782026 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:52:33.892234  782026 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:52:33.866996  778743 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:33.867020  778743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:52:33.867084  778743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-641599
	I1206 09:52:33.867713  778743 addons.go:239] Setting addon default-storageclass=true in "newest-cni-641599"
	I1206 09:52:33.867760  778743 host.go:66] Checking if "newest-cni-641599" exists ...
	I1206 09:52:33.868266  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.901090  778743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/newest-cni-641599/id_rsa Username:docker}
	I1206 09:52:33.902303  778743 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:33.902332  778743 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:52:33.902406  778743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-641599
	I1206 09:52:33.931188  778743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/newest-cni-641599/id_rsa Username:docker}
	I1206 09:52:33.956142  778743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:52:34.000905  778743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:34.019354  778743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:34.066170  778743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:34.207346  778743 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1206 09:52:34.209353  778743 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:34.209427  778743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:34.384953  778743 api_server.go:72] duration metric: took 549.802737ms to wait for apiserver process to appear ...
	I1206 09:52:34.384982  778743 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:34.385002  778743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:52:34.390257  778743 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 09:52:34.391261  778743 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:52:34.391294  778743 api_server.go:131] duration metric: took 6.303663ms to wait for apiserver health ...
	I1206 09:52:34.391304  778743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:34.393369  778743 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:52:34.394798  778743 addons.go:530] duration metric: took 559.141528ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:52:34.395190  778743 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:34.395225  778743 system_pods.go:61] "coredns-7d764666f9-8njm9" [97429e74-14c2-47b6-aecd-8b863a997474] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:52:34.395242  778743 system_pods.go:61] "etcd-newest-cni-641599" [ca0d2519-e026-4dee-a3fb-ce7df13ee8fc] Running
	I1206 09:52:34.395255  778743 system_pods.go:61] "kindnet-kv2gc" [0f27b79f-29eb-4e3e-9a65-fbc2529e4f09] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:52:34.395262  778743 system_pods.go:61] "kube-apiserver-newest-cni-641599" [40559cd7-889e-49dd-9f65-0b5e9a543dc2] Running
	I1206 09:52:34.395272  778743 system_pods.go:61] "kube-controller-manager-newest-cni-641599" [a609bb41-7a15-4452-9140-a79c35a026c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:52:34.395294  778743 system_pods.go:61] "kube-proxy-fv54r" [b74c4162-c9cd-43a6-9a4a-2162b2899489] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:52:34.395304  778743 system_pods.go:61] "kube-scheduler-newest-cni-641599" [81daab83-11f5-44cb-982c-212001fe43a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:52:34.395309  778743 system_pods.go:61] "storage-provisioner" [4de61ac3-6403-4c30-9cea-246b6f8bc458] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:52:34.395319  778743 system_pods.go:74] duration metric: took 4.008824ms to wait for pod list to return data ...
	I1206 09:52:34.395326  778743 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:34.397781  778743 default_sa.go:45] found service account: "default"
	I1206 09:52:34.397804  778743 default_sa.go:55] duration metric: took 2.472195ms for default service account to be created ...
	I1206 09:52:34.397819  778743 kubeadm.go:587] duration metric: took 562.671247ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:52:34.397836  778743 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:34.442287  778743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:34.442320  778743 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:34.442334  778743 node_conditions.go:105] duration metric: took 44.493558ms to run NodePressure ...
	I1206 09:52:34.442347  778743 start.go:242] waiting for startup goroutines ...
	I1206 09:52:34.714855  778743 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-641599" context rescaled to 1 replicas
	I1206 09:52:34.714897  778743 start.go:247] waiting for cluster config update ...
	I1206 09:52:34.714911  778743 start.go:256] writing updated cluster config ...
	I1206 09:52:34.715173  778743 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:34.769780  778743 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:52:34.772739  778743 out.go:179] * Done! kubectl is now configured to use "newest-cni-641599" cluster and "default" namespace by default
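	
	==> example: apiserver healthz poll (Go) <==
	The api_server.go lines above repeatedly hit https://192.168.94.2:8443/healthz and treat a 500 ("healthz check failed" while post-start hooks such as rbac/bootstrap-roles are still finishing) as "retry later", until a 200 comes back. A minimal sketch of that readiness loop, assuming a self-signed apiserver serving certificate (hence InsecureSkipVerify); illustrative only, not minikube's actual implementation:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the deadline expires. A 500 just means some post-start
	// hooks are still running, so we sleep and retry.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is self-signed in this setup; a real
			// client would pin the cluster CA instead of skipping verify.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}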
	
	
	==> CRI-O <==
	Dec 06 09:52:23 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:23.16391112Z" level=info msg="Starting container: f691931809677ba9b460f8d65ef5f139e57f5f5c62cd9f5d754b652156a59152" id=802e8c93-351b-4f5f-8f8c-67d03d79d8fa name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:23 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:23.165978463Z" level=info msg="Started container" PID=1823 containerID=f691931809677ba9b460f8d65ef5f139e57f5f5c62cd9f5d754b652156a59152 description=kube-system/coredns-66bc5c9577-gpnjq/coredns id=802e8c93-351b-4f5f-8f8c-67d03d79d8fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=2da7a0a705373dc73e20564ee8495dac82a804c739567ad8b6c56951672573f1
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.721088548Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7b568571-7b1f-4d06-a12b-6d1205625d00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.721166612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.72660024Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1fb08569a9bb317d918ef43f5e0e3a389d639a2f5c489781b7ebc2b237cf4b6f UID:19743c1a-5c97-490a-bed0-702d9c410f3e NetNS:/var/run/netns/6c0ddd07-35d8-43e1-94aa-ba6f02d955ca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00028a8f0}] Aliases:map[]}"
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.726631636Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.736207936Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1fb08569a9bb317d918ef43f5e0e3a389d639a2f5c489781b7ebc2b237cf4b6f UID:19743c1a-5c97-490a-bed0-702d9c410f3e NetNS:/var/run/netns/6c0ddd07-35d8-43e1-94aa-ba6f02d955ca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00028a8f0}] Aliases:map[]}"
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.736350229Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.7370474Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.737954412Z" level=info msg="Ran pod sandbox 1fb08569a9bb317d918ef43f5e0e3a389d639a2f5c489781b7ebc2b237cf4b6f with infra container: default/busybox/POD" id=7b568571-7b1f-4d06-a12b-6d1205625d00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.739235593Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0442abe-ec83-4c17-8ee1-c7d9c2ee23eb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.739400016Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f0442abe-ec83-4c17-8ee1-c7d9c2ee23eb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.739452542Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f0442abe-ec83-4c17-8ee1-c7d9c2ee23eb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.74022844Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ef109c03-9bbb-46c9-93f8-fe2f092e70e0 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:52:25 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:25.743955536Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.892236682Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=ef109c03-9bbb-46c9-93f8-fe2f092e70e0 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.893047838Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d551f7f8-614f-4875-8ace-9bb24307ad6f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.894539201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0007cb69-b58d-43ce-bab4-ad0e8077ffa0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.897622739Z" level=info msg="Creating container: default/busybox/busybox" id=8b89d43e-c701-442a-a881-fd0c64c95734 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.897758072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.902033932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.902682539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.936536213Z" level=info msg="Created container fd6289ef9850109bd148e16672403d4899146b7eaa9604a02c52d31a6c933cea: default/busybox/busybox" id=8b89d43e-c701-442a-a881-fd0c64c95734 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.937156607Z" level=info msg="Starting container: fd6289ef9850109bd148e16672403d4899146b7eaa9604a02c52d31a6c933cea" id=3480f2c4-fb7b-4e0d-9cf5-993cafaf8f01 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:27 default-k8s-diff-port-759696 crio[776]: time="2025-12-06T09:52:27.938901843Z" level=info msg="Started container" PID=1904 containerID=fd6289ef9850109bd148e16672403d4899146b7eaa9604a02c52d31a6c933cea description=default/busybox/busybox id=3480f2c4-fb7b-4e0d-9cf5-993cafaf8f01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1fb08569a9bb317d918ef43f5e0e3a389d639a2f5c489781b7ebc2b237cf4b6f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	fd6289ef98501       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   1fb08569a9bb3       busybox                                                default
	f691931809677       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   2da7a0a705373       coredns-66bc5c9577-gpnjq                               kube-system
	89fa2ec1be527       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   b383f28cb8e8f       storage-provisioner                                    kube-system
	3960f5cc45cc2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   0b3b4afec476c       kindnet-cv6n8                                          kube-system
	93d507cf206bf       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      24 seconds ago      Running             kube-proxy                0                   9a3f8cf72d93f       kube-proxy-jstq5                                       kube-system
	6a4f09fdf2361       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      35 seconds ago      Running             kube-scheduler            0                   efa6c95d60675       kube-scheduler-default-k8s-diff-port-759696            kube-system
	302e38fea838c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      35 seconds ago      Running             kube-controller-manager   0                   e0fedf4b022b0       kube-controller-manager-default-k8s-diff-port-759696   kube-system
	f664f274382f4       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      35 seconds ago      Running             kube-apiserver            0                   eeea3b9d47d24       kube-apiserver-default-k8s-diff-port-759696            kube-system
	a630474b505c5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   ed2294bd64fdc       etcd-default-k8s-diff-port-759696                      kube-system
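	
	==> example: listing containers with crictl (Go) <==
	The container-status table above is CRI listing output. A small sketch that collects the same listing by shelling out to crictl (assumes crictl is on PATH and the process can reach the CRI socket, e.g. run inside the node with sudo); an illustration, not the harness's actual collection code:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Equivalent to the `crictl ps -a` listing shown above; a label
		// filter such as --label io.kubernetes.pod.namespace=kube-system
		// narrows it to one namespace.
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}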
	
	
	==> coredns [f691931809677ba9b460f8d65ef5f139e57f5f5c62cd9f5d754b652156a59152] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38868 - 43372 "HINFO IN 4394499508407985473.871618668102654570. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.031496886s
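	
	==> example: verifying the host.minikube.internal record (Go) <==
	Earlier in the log the harness injects a host.minikube.internal record into CoreDNS's ConfigMap, and kube-dns is allocated ClusterIP 10.96.0.10 (see the kube-apiserver section below). A sketch that checks the record by querying that ClusterIP directly; it must run from inside the cluster network, and the address it returns depends on the cluster's host gateway IP:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Resolve against the kube-dns ClusterIP instead of /etc/resolv.conf.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "host.minikube.internal")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs) // expected: the host gateway IP injected into the ConfigMap
	}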
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-759696
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-759696
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=default-k8s-diff-port-759696
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-759696
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:52:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:52:22 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:52:22 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:52:22 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:52:22 +0000   Sat, 06 Dec 2025 09:52:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-759696
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                66717458-de25-4b46-9089-82e699ed1547
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-gpnjq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-759696                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-cv6n8                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-759696             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-759696    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-jstq5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-759696             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-759696 event: Registered Node default-k8s-diff-port-759696 in Controller
	  Normal  NodeReady                15s                kubelet          Node default-k8s-diff-port-759696 status is now: NodeReady
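	
	==> example: checking the node Ready condition (Go) <==
	The node_ready.go wait earlier in the log amounts to checking the Ready condition shown in this table. A minimal client-go sketch of that check, assuming a go.mod that pulls in k8s.io/client-go and a hypothetical kubeconfig path; illustrative, not minikube's code:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Hypothetical kubeconfig location; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-759696", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Mirrors the "node ... is Ready" check in the log above.
				fmt.Printf("node Ready=%s (%s)\n", c.Status, c.Reason)
			}
		}
	}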
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [a630474b505c5155485dbc94ef52662e61365de252055cc9f043136f7a2850a5] <==
	{"level":"warn","ts":"2025-12-06T09:52:02.906547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:02.915398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:02.924389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:02.935077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:02.955579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:02.965583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:02.974691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:02.984445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:02.992544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.002482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.010602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.020525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.029888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.039054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.050790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.062373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.076757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.084821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.095703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.127179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.137348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.147764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:03.218155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:15.804147Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.004759ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790495883610629 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-759696\" mod_revision:422 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-759696\" value_size:7250 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-759696\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:52:15.804322Z","caller":"traceutil/trace.go:172","msg":"trace[1910971045] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"296.637337ms","start":"2025-12-06T09:52:15.507659Z","end":"2025-12-06T09:52:15.804296Z","steps":["trace[1910971045] 'process raft request'  (duration: 163.105029ms)","trace[1910971045] 'compare'  (duration: 132.918255ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:52:37 up  2:35,  0 user,  load average: 4.56, 2.95, 3.25
	Linux default-k8s-diff-port-759696 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3960f5cc45cc27405eb40c3bf84bf1e95a7998dd0b5c652b3df0498e2e32e8ac] <==
	I1206 09:52:12.379682       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:52:12.379934       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1206 09:52:12.380086       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:52:12.380104       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:52:12.380140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:52:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:52:12.678137       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:52:12.678416       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:52:12.678443       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:52:12.679022       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:52:13.175103       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:52:13.175142       1 metrics.go:72] Registering metrics
	I1206 09:52:13.175244       1 controller.go:711] "Syncing nftables rules"
	I1206 09:52:22.680734       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:52:22.680802       1 main.go:301] handling current node
	I1206 09:52:32.681933       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:52:32.681997       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f664f274382f4ebb56dafee5ed158ee69807db357f18e369171300cf65934304] <==
	E1206 09:52:03.853248       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1206 09:52:03.903869       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:52:03.907521       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:03.907733       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1206 09:52:03.921810       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:03.921947       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:52:04.040088       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:52:04.698670       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:52:04.704013       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:52:04.704035       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:52:05.268243       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:52:05.316416       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:52:05.406900       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:52:05.413174       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1206 09:52:05.414341       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:52:05.418950       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:52:05.939190       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:52:06.579709       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:52:06.588965       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:52:06.597043       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:52:11.689675       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1206 09:52:11.793290       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:52:11.897540       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:11.904003       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1206 09:52:35.527964       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:48506: use of closed network connection
	
	
	==> kube-controller-manager [302e38fea838ce52e156ce92f2ab35cce7cda40862e0861db902b0ad2fc28e95] <==
	I1206 09:52:10.938729       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:52:10.938761       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:52:10.938793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:52:10.938731       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:52:10.938825       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-759696"
	I1206 09:52:10.938878       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1206 09:52:10.938952       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:52:10.938968       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:52:10.939629       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:52:10.939737       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 09:52:10.940833       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:52:10.940862       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:52:10.940918       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:52:10.942127       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:52:10.942935       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:52:10.950152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:52:10.961355       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 09:52:10.961438       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 09:52:10.961501       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 09:52:10.961516       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 09:52:10.961525       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 09:52:10.968065       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:52:10.972531       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-759696" podCIDRs=["10.244.0.0/24"]
	I1206 09:52:10.977824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:52:25.940862       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [93d507cf206bfdd6e66384e6d0b9bfdffb4402c2b01ed3504a60e0a0dc05fe34] <==
	I1206 09:52:12.180067       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:52:12.293142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:52:12.394038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:52:12.394089       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1206 09:52:12.394200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:52:12.416915       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:52:12.416993       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:52:12.428377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:52:12.428903       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:52:12.429356       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:52:12.435184       1 config.go:200] "Starting service config controller"
	I1206 09:52:12.439243       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:52:12.435610       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:52:12.435631       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:52:12.439318       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:52:12.439329       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:52:12.439364       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:52:12.436140       1 config.go:309] "Starting node config controller"
	I1206 09:52:12.439491       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:52:12.439566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:52:12.439445       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:52:12.540424       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6a4f09fdf2361cc4ae4bdd8a459f48e56ef16904f07fc021425631ecb28ee32d] <==
	E1206 09:52:03.812046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:52:03.812122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:52:03.812178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:52:03.812269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:52:03.812328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:52:03.814665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:52:03.814887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:52:03.815017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:52:03.815108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:52:03.815181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:52:03.815804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:52:03.816321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:52:04.641152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:52:04.738909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:52:04.740149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:52:04.759877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:52:04.779539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:52:04.800634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:52:04.833014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:52:04.840806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:52:04.881226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:52:04.947053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:52:05.058419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:52:05.150252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:52:07.106527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:52:07 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:07.577109    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-759696" podStartSLOduration=1.577083368 podStartE2EDuration="1.577083368s" podCreationTimestamp="2025-12-06 09:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:07.562369194 +0000 UTC m=+1.210687518" watchObservedRunningTime="2025-12-06 09:52:07.577083368 +0000 UTC m=+1.225401691"
	Dec 06 09:52:07 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:07.593957    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-759696" podStartSLOduration=1.5939340739999999 podStartE2EDuration="1.593934074s" podCreationTimestamp="2025-12-06 09:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:07.579158315 +0000 UTC m=+1.227476637" watchObservedRunningTime="2025-12-06 09:52:07.593934074 +0000 UTC m=+1.242252397"
	Dec 06 09:52:07 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:07.594093    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-759696" podStartSLOduration=1.594088988 podStartE2EDuration="1.594088988s" podCreationTimestamp="2025-12-06 09:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:07.593506489 +0000 UTC m=+1.241824814" watchObservedRunningTime="2025-12-06 09:52:07.594088988 +0000 UTC m=+1.242407313"
	Dec 06 09:52:07 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:07.610802    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-759696" podStartSLOduration=1.610783961 podStartE2EDuration="1.610783961s" podCreationTimestamp="2025-12-06 09:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:07.606879952 +0000 UTC m=+1.255198278" watchObservedRunningTime="2025-12-06 09:52:07.610783961 +0000 UTC m=+1.259102287"
	Dec 06 09:52:10 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:10.997401    1312 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.000999    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.776693    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9d4f2bb-5c58-4876-9004-b91d6491059f-lib-modules\") pod \"kube-proxy-jstq5\" (UID: \"b9d4f2bb-5c58-4876-9004-b91d6491059f\") " pod="kube-system/kube-proxy-jstq5"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.777035    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6sbz\" (UniqueName: \"kubernetes.io/projected/b9d4f2bb-5c58-4876-9004-b91d6491059f-kube-api-access-c6sbz\") pod \"kube-proxy-jstq5\" (UID: \"b9d4f2bb-5c58-4876-9004-b91d6491059f\") " pod="kube-system/kube-proxy-jstq5"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.777098    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/16171d40-7e5a-470a-8865-3184dcdf759a-cni-cfg\") pod \"kindnet-cv6n8\" (UID: \"16171d40-7e5a-470a-8865-3184dcdf759a\") " pod="kube-system/kindnet-cv6n8"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.777120    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16171d40-7e5a-470a-8865-3184dcdf759a-lib-modules\") pod \"kindnet-cv6n8\" (UID: \"16171d40-7e5a-470a-8865-3184dcdf759a\") " pod="kube-system/kindnet-cv6n8"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.777144    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9d4f2bb-5c58-4876-9004-b91d6491059f-xtables-lock\") pod \"kube-proxy-jstq5\" (UID: \"b9d4f2bb-5c58-4876-9004-b91d6491059f\") " pod="kube-system/kube-proxy-jstq5"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.777167    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xb2f\" (UniqueName: \"kubernetes.io/projected/16171d40-7e5a-470a-8865-3184dcdf759a-kube-api-access-5xb2f\") pod \"kindnet-cv6n8\" (UID: \"16171d40-7e5a-470a-8865-3184dcdf759a\") " pod="kube-system/kindnet-cv6n8"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.777228    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9d4f2bb-5c58-4876-9004-b91d6491059f-kube-proxy\") pod \"kube-proxy-jstq5\" (UID: \"b9d4f2bb-5c58-4876-9004-b91d6491059f\") " pod="kube-system/kube-proxy-jstq5"
	Dec 06 09:52:11 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:11.777259    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16171d40-7e5a-470a-8865-3184dcdf759a-xtables-lock\") pod \"kindnet-cv6n8\" (UID: \"16171d40-7e5a-470a-8865-3184dcdf759a\") " pod="kube-system/kindnet-cv6n8"
	Dec 06 09:52:12 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:12.516848    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jstq5" podStartSLOduration=1.516819022 podStartE2EDuration="1.516819022s" podCreationTimestamp="2025-12-06 09:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:12.516646192 +0000 UTC m=+6.164964517" watchObservedRunningTime="2025-12-06 09:52:12.516819022 +0000 UTC m=+6.165137349"
	Dec 06 09:52:12 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:12.516999    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cv6n8" podStartSLOduration=1.516988801 podStartE2EDuration="1.516988801s" podCreationTimestamp="2025-12-06 09:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:12.503602523 +0000 UTC m=+6.151920848" watchObservedRunningTime="2025-12-06 09:52:12.516988801 +0000 UTC m=+6.165307126"
	Dec 06 09:52:22 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:22.787446    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:52:22 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:22.858865    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6wg\" (UniqueName: \"kubernetes.io/projected/35b5ac9a-54cb-43da-9e91-3126be5a1e48-kube-api-access-mt6wg\") pod \"storage-provisioner\" (UID: \"35b5ac9a-54cb-43da-9e91-3126be5a1e48\") " pod="kube-system/storage-provisioner"
	Dec 06 09:52:22 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:22.858932    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/35b5ac9a-54cb-43da-9e91-3126be5a1e48-tmp\") pod \"storage-provisioner\" (UID: \"35b5ac9a-54cb-43da-9e91-3126be5a1e48\") " pod="kube-system/storage-provisioner"
	Dec 06 09:52:22 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:22.858986    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bfbb94-ba21-443d-ab29-f519f4d70c64-config-volume\") pod \"coredns-66bc5c9577-gpnjq\" (UID: \"a0bfbb94-ba21-443d-ab29-f519f4d70c64\") " pod="kube-system/coredns-66bc5c9577-gpnjq"
	Dec 06 09:52:22 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:22.859029    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kthkl\" (UniqueName: \"kubernetes.io/projected/a0bfbb94-ba21-443d-ab29-f519f4d70c64-kube-api-access-kthkl\") pod \"coredns-66bc5c9577-gpnjq\" (UID: \"a0bfbb94-ba21-443d-ab29-f519f4d70c64\") " pod="kube-system/coredns-66bc5c9577-gpnjq"
	Dec 06 09:52:23 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:23.540402    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gpnjq" podStartSLOduration=11.540336502 podStartE2EDuration="11.540336502s" podCreationTimestamp="2025-12-06 09:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:23.526802242 +0000 UTC m=+17.175120565" watchObservedRunningTime="2025-12-06 09:52:23.540336502 +0000 UTC m=+17.188654827"
	Dec 06 09:52:23 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:23.552348    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.552327898 podStartE2EDuration="11.552327898s" podCreationTimestamp="2025-12-06 09:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:23.540881056 +0000 UTC m=+17.189199381" watchObservedRunningTime="2025-12-06 09:52:23.552327898 +0000 UTC m=+17.200646223"
	Dec 06 09:52:25 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:25.479836    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2mzc\" (UniqueName: \"kubernetes.io/projected/19743c1a-5c97-490a-bed0-702d9c410f3e-kube-api-access-v2mzc\") pod \"busybox\" (UID: \"19743c1a-5c97-490a-bed0-702d9c410f3e\") " pod="default/busybox"
	Dec 06 09:52:28 default-k8s-diff-port-759696 kubelet[1312]: I1206 09:52:28.541950    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.387801147 podStartE2EDuration="3.541929915s" podCreationTimestamp="2025-12-06 09:52:25 +0000 UTC" firstStartedPulling="2025-12-06 09:52:25.739796324 +0000 UTC m=+19.388114628" lastFinishedPulling="2025-12-06 09:52:27.893925076 +0000 UTC m=+21.542243396" observedRunningTime="2025-12-06 09:52:28.54158053 +0000 UTC m=+22.189898856" watchObservedRunningTime="2025-12-06 09:52:28.541929915 +0000 UTC m=+22.190248235"
	
	
	==> storage-provisioner [89fa2ec1be527af6a44a9b044459416232ac38d2f2c2bcb1be6f0ea22f5e9a4e] <==
	I1206 09:52:23.169542       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:52:23.178343       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:52:23.178399       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:52:23.180844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:23.185333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:52:23.185515       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:52:23.185632       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"319c0bdb-ab5a-4a15-8303-dcd154877547", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-759696_10ac2cbd-3e77-472e-8b85-c4354c925232 became leader
	I1206 09:52:23.185651       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759696_10ac2cbd-3e77-472e-8b85-c4354c925232!
	W1206 09:52:23.187895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:23.193317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:52:23.286077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759696_10ac2cbd-3e77-472e-8b85-c4354c925232!
	W1206 09:52:25.197075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:25.202396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:27.206299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:27.210594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:29.215085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:29.219978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:31.223694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:31.228609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:33.233073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:33.238902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:35.241910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:35.246363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:37.250126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:37.255544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
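The kube-scheduler "Failed to watch ... is forbidden" errors in the log above are transient startup noise: the informers begin listing before the scheduler's RBAC bindings are visible, and the closing "Caches are synced" line shows they recovered. A minimal spot check for one of those resources, assuming the kubeconfig context from this run and a caller permitted to impersonate users:

	kubectl --context default-k8s-diff-port-759696 auth can-i list persistentvolumeclaims --as=system:kube-scheduler
	# Prints "yes" once the scheduler's ClusterRoleBinding is in place.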
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-759696 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.30s)
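The repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above are leader-election noise: the provisioner still records its lock in an Endpoints object rather than a coordination.k8s.io Lease, and the LeaderElection event names that object. A hypothetical way to inspect the lock, assuming the same context:

	kubectl --context default-k8s-diff-port-759696 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The holder is typically recorded in the control-plane.alpha.kubernetes.io/leader annotation.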

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (426.165704ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:52:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
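The MK_ADDON_ENABLE_PAUSED exit is not about the addon itself: minikube's "is the cluster paused" check shells out to the runc command quoted above, which fails because /run/runc (runc's default state root for the root user) does not exist on this node; CRI-O here may be using a different runtime root or a different low-level runtime entirely. A minimal reproduction inside the node, reusing the exact command from the error:

	minikube -p embed-certs-997968 ssh -- sudo runc list -f json
	# Fails on this node with: open /run/runc: no such file or directory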
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-997968 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-997968 describe deploy/metrics-server -n kube-system: exit status 1 (79.683908ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-997968 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
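The empty deployment info is a downstream symptom: the `addons enable` call never created the metrics-server Deployment, so the describe above returns NotFound and there is nothing to match. The expected string itself is just the two flags joined, with the --registries value prefixed onto the --images value. A hypothetical check against a run where the addon did deploy:

	kubectl --context embed-certs-997968 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Should print fake.domain/registry.k8s.io/echoserver:1.4 on success.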
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-997968
helpers_test.go:243: (dbg) docker inspect embed-certs-997968:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062",
	        "Created": "2025-12-06T09:51:52.675095642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 772671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:51:52.715585857Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/hosts",
	        "LogPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062-json.log",
	        "Name": "/embed-certs-997968",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-997968:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-997968",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062",
	                "LowerDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-997968",
	                "Source": "/var/lib/docker/volumes/embed-certs-997968/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-997968",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-997968",
	                "name.minikube.sigs.k8s.io": "embed-certs-997968",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "221bbf4e4302a78335e15423b94749c4ac901fe44fc5d307867b449c48f42640",
	            "SandboxKey": "/var/run/docker/netns/221bbf4e4302",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-997968": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d9447c39c3ca701200d25c23e931e64eec9678dd597d8d4ca10d4b524dddd69",
	                    "EndpointID": "b169d700a33200d719bb3c2d99ea14bbfc86d8fd96098bfd00eccea4c997bafb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ce:0b:80:19:a6:a7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-997968",
	                        "0e3f6d38a916"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
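The inspect dump above also shows how minikube reaches the node: every service port is published on 127.0.0.1, and SSH goes through the 22/tcp mapping. A one-liner to pull that port back out of the same JSON with docker's standard Go-template formatter:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-997968
	# => 33201 on this run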
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997968 -n embed-certs-997968
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-997968 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-997968 logs -n 25: (1.062395378s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p cert-expiration-669264 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-669264       │ jenkins │ v1.37.0 │ 06 Dec 25 09:50 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p cert-expiration-669264                                                                                                                                                                                                                            │ cert-expiration-669264       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ image   │ old-k8s-version-507108 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ pause   │ -p old-k8s-version-507108 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                            │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p kubernetes-upgrade-581224                                                                                                                                                                                                                         │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                            │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-920129                                                                                                                                                                                                                      │ disable-driver-mounts-920129 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p no-preload-521770 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p stopped-upgrade-031481                                                                                                                                                                                                                            │ stopped-upgrade-031481       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p no-preload-521770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:52:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:52:24.230296  782026 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:52:24.230422  782026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:52:24.230432  782026 out.go:374] Setting ErrFile to fd 2...
	I1206 09:52:24.230439  782026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:52:24.230661  782026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:52:24.231206  782026 out.go:368] Setting JSON to false
	I1206 09:52:24.232660  782026 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9288,"bootTime":1765005456,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:52:24.232745  782026 start.go:143] virtualization: kvm guest
	I1206 09:52:24.234621  782026 out.go:179] * [no-preload-521770] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:52:24.235951  782026 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:52:24.235969  782026 notify.go:221] Checking for updates...
	I1206 09:52:24.238001  782026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:52:24.239277  782026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:24.240424  782026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:52:24.241537  782026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:52:24.243281  782026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:52:24.245035  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:24.245892  782026 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:52:24.276543  782026 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:52:24.276704  782026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:52:24.353259  782026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:52:24.340425815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:52:24.353400  782026 docker.go:319] overlay module found
	I1206 09:52:24.356378  782026 out.go:179] * Using the docker driver based on existing profile
	I1206 09:52:24.357384  782026 start.go:309] selected driver: docker
	I1206 09:52:24.357403  782026 start.go:927] validating driver "docker" against &{Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:24.357556  782026 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:52:24.358245  782026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:52:24.428901  782026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:52:24.419008447 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:52:24.429187  782026 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:24.429223  782026 cni.go:84] Creating CNI manager for ""
	I1206 09:52:24.429316  782026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:24.429384  782026 start.go:353] cluster config:
	{Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:24.431915  782026 out.go:179] * Starting "no-preload-521770" primary control-plane node in "no-preload-521770" cluster
	I1206 09:52:24.432866  782026 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:52:24.433846  782026 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:52:24.434780  782026 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:52:24.434892  782026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:52:24.434902  782026 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/config.json ...
	I1206 09:52:24.435051  782026 cache.go:107] acquiring lock: {Name:mk3f028e80f8ac87cdcd24320d70e36a894791c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435140  782026 cache.go:107] acquiring lock: {Name:mkdc523156a072e4947d577065578e91a9732b77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435195  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1206 09:52:24.435150  782026 cache.go:107] acquiring lock: {Name:mke4ba1139ae959d606dd38112efde7d4d448b97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435205  782026 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 72.054µs
	I1206 09:52:24.435222  782026 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 09:52:24.435195  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 09:52:24.435241  782026 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 196.366µs
	I1206 09:52:24.435210  782026 cache.go:107] acquiring lock: {Name:mkd3b5a28f8041fde0d80c5102632df37b913591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435260  782026 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435276  782026 cache.go:107] acquiring lock: {Name:mk715c193fee45ce0be781bde9149a4d7c68db76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435277  782026 cache.go:107] acquiring lock: {Name:mkacf44d4c7d284d9b31511b6f07c1d37c06e59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435307  782026 cache.go:107] acquiring lock: {Name:mk06fdc2189bb8fbdd9f705d1a497d61567fd9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435321  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 09:52:24.435319  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 09:52:24.435328  782026 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 55.543µs
	I1206 09:52:24.435337  782026 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 09:52:24.435334  782026 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 245.346µs
	I1206 09:52:24.435326  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 09:52:24.435346  782026 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 09:52:24.435347  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 09:52:24.435347  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 09:52:24.435357  782026 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 151.015µs
	I1206 09:52:24.435350  782026 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 76.006µs
	I1206 09:52:24.435367  782026 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435369  782026 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435049  782026 cache.go:107] acquiring lock: {Name:mke865bc2a308b5226070dc1deef9b7218b9996f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.435428  782026 cache.go:115] /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 09:52:24.435435  782026 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 401.873µs
	I1206 09:52:24.435442  782026 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 09:52:24.435445  782026 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 56.266µs
	I1206 09:52:24.435479  782026 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 09:52:24.435539  782026 cache.go:87] Successfully saved all images to host disk.
	I1206 09:52:24.458045  782026 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:52:24.458074  782026 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:52:24.458091  782026 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:52:24.458128  782026 start.go:360] acquireMachinesLock for no-preload-521770: {Name:mkf85c9fe05269c67d1e37d10022df9548bf23d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:24.458195  782026 start.go:364] duration metric: took 47.288µs to acquireMachinesLock for "no-preload-521770"
	I1206 09:52:24.458214  782026 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:52:24.458221  782026 fix.go:54] fixHost starting: 
	I1206 09:52:24.458538  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:24.478477  782026 fix.go:112] recreateIfNeeded on no-preload-521770: state=Stopped err=<nil>
	W1206 09:52:24.478528  782026 fix.go:138] unexpected machine state, will restart: <nil>
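For reference, the state probe in the lines above is a plain Docker CLI call; it can be reproduced by hand outside the test harness (hypothetical manual check, not part of the recorded run):

	docker container inspect no-preload-521770 --format '{{.State.Status}}'
	# prints the container state, e.g. "exited" for a stopped machine, which is
	# what sends minikube down the "Restarting existing docker container" path below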
	W1206 09:52:22.733773  771291 node_ready.go:57] node "default-k8s-diff-port-759696" has "Ready":"False" status (will retry)
	I1206 09:52:23.232972  771291 node_ready.go:49] node "default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.233002  771291 node_ready.go:38] duration metric: took 11.002657942s for node "default-k8s-diff-port-759696" to be "Ready" ...
	I1206 09:52:23.233017  771291 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:23.233074  771291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:23.246060  771291 api_server.go:72] duration metric: took 11.380999717s to wait for apiserver process to appear ...
	I1206 09:52:23.246087  771291 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:23.246110  771291 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1206 09:52:23.250298  771291 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1206 09:52:23.251303  771291 api_server.go:141] control plane version: v1.34.2
	I1206 09:52:23.251332  771291 api_server.go:131] duration metric: took 5.237123ms to wait for apiserver health ...
	I1206 09:52:23.251343  771291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:23.255051  771291 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:23.255095  771291 system_pods.go:61] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:23.255118  771291 system_pods.go:61] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.255131  771291 system_pods.go:61] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.255144  771291 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.255151  771291 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.255160  771291 system_pods.go:61] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.255167  771291 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.255177  771291 system_pods.go:61] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:23.255191  771291 system_pods.go:74] duration metric: took 3.838741ms to wait for pod list to return data ...
	I1206 09:52:23.255204  771291 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:23.257429  771291 default_sa.go:45] found service account: "default"
	I1206 09:52:23.257448  771291 default_sa.go:55] duration metric: took 2.236469ms for default service account to be created ...
	I1206 09:52:23.257484  771291 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:52:23.260094  771291 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:23.260118  771291 system_pods.go:89] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:23.260123  771291 system_pods.go:89] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.260172  771291 system_pods.go:89] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.260176  771291 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.260180  771291 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.260187  771291 system_pods.go:89] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.260190  771291 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.260198  771291 system_pods.go:89] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:23.260226  771291 retry.go:31] will retry after 301.255841ms: missing components: kube-dns
	I1206 09:52:23.564931  771291 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:23.564969  771291 system_pods.go:89] "coredns-66bc5c9577-gpnjq" [a0bfbb94-ba21-443d-ab29-f519f4d70c64] Running
	I1206 09:52:23.564978  771291 system_pods.go:89] "etcd-default-k8s-diff-port-759696" [169c7fea-496c-4db1-9fef-e499e38ec7a1] Running
	I1206 09:52:23.564985  771291 system_pods.go:89] "kindnet-cv6n8" [16171d40-7e5a-470a-8865-3184dcdf759a] Running
	I1206 09:52:23.564990  771291 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759696" [cfd0902c-97a9-49ef-9444-7a6c40e3e9d9] Running
	I1206 09:52:23.564997  771291 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759696" [3092418b-448a-4fb6-aa0e-6eebe595b286] Running
	I1206 09:52:23.565002  771291 system_pods.go:89] "kube-proxy-jstq5" [b9d4f2bb-5c58-4876-9004-b91d6491059f] Running
	I1206 09:52:23.565007  771291 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759696" [a919e152-5891-4b38-b802-9f54054ec00d] Running
	I1206 09:52:23.565012  771291 system_pods.go:89] "storage-provisioner" [35b5ac9a-54cb-43da-9e91-3126be5a1e48] Running
	I1206 09:52:23.565023  771291 system_pods.go:126] duration metric: took 307.529453ms to wait for k8s-apps to be running ...
	I1206 09:52:23.565037  771291 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:52:23.565093  771291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:52:23.578827  771291 system_svc.go:56] duration metric: took 13.778342ms WaitForService to wait for kubelet
	I1206 09:52:23.578859  771291 kubeadm.go:587] duration metric: took 11.713805961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:23.578882  771291 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:23.581992  771291 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:23.582044  771291 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:23.582067  771291 node_conditions.go:105] duration metric: took 3.178425ms to run NodePressure ...
	I1206 09:52:23.582093  771291 start.go:242] waiting for startup goroutines ...
	I1206 09:52:23.582106  771291 start.go:247] waiting for cluster config update ...
	I1206 09:52:23.582126  771291 start.go:256] writing updated cluster config ...
	I1206 09:52:23.582452  771291 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:23.588357  771291 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:23.664775  771291 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.669577  771291 pod_ready.go:94] pod "coredns-66bc5c9577-gpnjq" is "Ready"
	I1206 09:52:23.669603  771291 pod_ready.go:86] duration metric: took 4.791126ms for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.671822  771291 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.675792  771291 pod_ready.go:94] pod "etcd-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.675811  771291 pod_ready.go:86] duration metric: took 3.966323ms for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.677683  771291 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.681360  771291 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.681380  771291 pod_ready.go:86] duration metric: took 3.676297ms for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.683330  771291 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:23.993645  771291 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:23.993670  771291 pod_ready.go:86] duration metric: took 310.321581ms for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.194283  771291 pod_ready.go:83] waiting for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.594004  771291 pod_ready.go:94] pod "kube-proxy-jstq5" is "Ready"
	I1206 09:52:24.594047  771291 pod_ready.go:86] duration metric: took 399.738837ms for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:24.795328  771291 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:25.193288  771291 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-759696" is "Ready"
	I1206 09:52:25.193321  771291 pod_ready.go:86] duration metric: took 397.96695ms for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:25.193336  771291 pod_ready.go:40] duration metric: took 1.604949342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:25.245818  771291 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:52:25.247685  771291 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-759696" cluster and "default" namespace by default
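The readiness sequence above reduces to polling the apiserver's /healthz endpoint until it returns HTTP 200. A manual equivalent, assuming network reachability to the node IP, with -k because the serving certificate is signed by the cluster's own CA rather than a system-trusted one:

	curl -k https://192.168.103.2:8444/healthz
	# ok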
	W1206 09:52:22.907887  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	W1206 09:52:24.909512  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	I1206 09:52:28.881377  778743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:52:28.881427  778743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:52:28.881601  778743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:52:28.881695  778743 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:52:28.881749  778743 kubeadm.go:319] OS: Linux
	I1206 09:52:28.881841  778743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:52:28.881928  778743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:52:28.882000  778743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:52:28.882049  778743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:52:28.882132  778743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:52:28.882210  778743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:52:28.882277  778743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:52:28.882345  778743 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:52:28.882436  778743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:52:28.882576  778743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:52:28.882704  778743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:52:28.882775  778743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:52:28.884431  778743 out.go:252]   - Generating certificates and keys ...
	I1206 09:52:28.884529  778743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:52:28.884636  778743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:52:28.884748  778743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:52:28.884841  778743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:52:28.884943  778743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:52:28.885024  778743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:52:28.885100  778743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:52:28.885280  778743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-641599] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:52:28.885358  778743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:52:28.885556  778743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-641599] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:52:28.885624  778743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:52:28.885735  778743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:52:28.885800  778743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:52:28.885872  778743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:52:28.885933  778743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:52:28.885985  778743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:52:28.886031  778743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:52:28.886092  778743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:52:28.886144  778743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:52:28.886235  778743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:52:28.886303  778743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:52:28.887431  778743 out.go:252]   - Booting up control plane ...
	I1206 09:52:28.887524  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:52:28.887617  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:52:28.887698  778743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:52:28.887847  778743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:52:28.887990  778743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:52:28.888099  778743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:52:28.888175  778743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:52:28.888229  778743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:52:28.888348  778743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:52:28.888468  778743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:52:28.888545  778743 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.352933ms
	I1206 09:52:28.888691  778743 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:52:28.888800  778743 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:52:28.888930  778743 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:52:28.889053  778743 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:52:28.889201  778743 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005929457s
	I1206 09:52:28.889315  778743 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.86815486s
	I1206 09:52:28.889418  778743 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502032967s
	I1206 09:52:28.889585  778743 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:52:28.889700  778743 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:52:28.889754  778743 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:52:28.889915  778743 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-641599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:52:28.889974  778743 kubeadm.go:319] [bootstrap-token] Using token: w8ash3.bz5dwngp2dkzla91
	I1206 09:52:28.891194  778743 out.go:252]   - Configuring RBAC rules ...
	I1206 09:52:28.891287  778743 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:52:28.891362  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:52:28.891577  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:52:28.891727  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:52:28.891834  778743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:52:28.891911  778743 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:52:28.892085  778743 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:52:28.892135  778743 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:52:28.892202  778743 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:52:28.892211  778743 kubeadm.go:319] 
	I1206 09:52:28.892294  778743 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:52:28.892309  778743 kubeadm.go:319] 
	I1206 09:52:28.892420  778743 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:52:28.892429  778743 kubeadm.go:319] 
	I1206 09:52:28.892488  778743 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:52:28.892587  778743 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:52:28.892635  778743 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:52:28.892639  778743 kubeadm.go:319] 
	I1206 09:52:28.892698  778743 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:52:28.892710  778743 kubeadm.go:319] 
	I1206 09:52:28.892782  778743 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:52:28.892791  778743 kubeadm.go:319] 
	I1206 09:52:28.892852  778743 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:52:28.892929  778743 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:52:28.892989  778743 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:52:28.892995  778743 kubeadm.go:319] 
	I1206 09:52:28.893066  778743 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:52:28.893153  778743 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:52:28.893165  778743 kubeadm.go:319] 
	I1206 09:52:28.893263  778743 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w8ash3.bz5dwngp2dkzla91 \
	I1206 09:52:28.893386  778743 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:52:28.893421  778743 kubeadm.go:319] 	--control-plane 
	I1206 09:52:28.893430  778743 kubeadm.go:319] 
	I1206 09:52:28.893539  778743 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:52:28.893551  778743 kubeadm.go:319] 
	I1206 09:52:28.893668  778743 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w8ash3.bz5dwngp2dkzla91 \
	I1206 09:52:28.893821  778743 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
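The --discovery-token-ca-cert-hash printed above is a SHA-256 over the cluster CA's public key. Following the standard kubeadm procedure, it can be recomputed on the control-plane node to validate a join command; the certificate path below follows the certificateDir shown earlier in this run:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex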
	I1206 09:52:28.893837  778743 cni.go:84] Creating CNI manager for ""
	I1206 09:52:28.893846  778743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:28.895223  778743 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:52:24.480389  782026 out.go:252] * Restarting existing docker container for "no-preload-521770" ...
	I1206 09:52:24.480470  782026 cli_runner.go:164] Run: docker start no-preload-521770
	I1206 09:52:24.745004  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:24.764399  782026 kic.go:430] container "no-preload-521770" state is running.
	I1206 09:52:24.764844  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:24.783728  782026 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/config.json ...
	I1206 09:52:24.784054  782026 machine.go:94] provisionDockerMachine start ...
	I1206 09:52:24.784143  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:24.805518  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:24.805838  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:24.805858  782026 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:52:24.806622  782026 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32898->127.0.0.1:33211: read: connection reset by peer
	I1206 09:52:27.951364  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-521770
	
	I1206 09:52:27.951393  782026 ubuntu.go:182] provisioning hostname "no-preload-521770"
	I1206 09:52:27.951451  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:27.971393  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:27.971652  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:27.971668  782026 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-521770 && echo "no-preload-521770" | sudo tee /etc/hostname
	I1206 09:52:28.116596  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-521770
	
	I1206 09:52:28.116743  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.145696  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:28.146039  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:28.146078  782026 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521770/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:52:28.277194  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:52:28.277232  782026 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:52:28.277279  782026 ubuntu.go:190] setting up certificates
	I1206 09:52:28.277296  782026 provision.go:84] configureAuth start
	I1206 09:52:28.277367  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:28.297825  782026 provision.go:143] copyHostCerts
	I1206 09:52:28.297892  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:52:28.297904  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:52:28.297965  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:52:28.298076  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:52:28.298087  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:52:28.298116  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:52:28.298173  782026 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:52:28.298181  782026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:52:28.298204  782026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:52:28.298263  782026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.no-preload-521770 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-521770]
	I1206 09:52:28.338699  782026 provision.go:177] copyRemoteCerts
	I1206 09:52:28.338753  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:52:28.338786  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.359141  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:28.454191  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:52:28.473624  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:52:28.491855  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:52:28.510074  782026 provision.go:87] duration metric: took 232.761897ms to configureAuth
	I1206 09:52:28.510100  782026 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:52:28.510273  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:28.510386  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.530157  782026 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:28.530466  782026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1206 09:52:28.530510  782026 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:52:28.858502  782026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:52:28.858532  782026 machine.go:97] duration metric: took 4.074459793s to provisionDockerMachine
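The drop-in written above marks the cluster's service CIDR (10.96.0.0/12, matching ServiceCIDR in the cluster config) as an insecure registry range for CRI-O, so in-cluster registries exposed on service IPs work without TLS. Its content can be confirmed with a hypothetical manual check:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '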
	I1206 09:52:28.858548  782026 start.go:293] postStartSetup for "no-preload-521770" (driver="docker")
	I1206 09:52:28.858563  782026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:52:28.858636  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:52:28.858705  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:28.878915  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:28.979184  782026 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:52:28.983787  782026 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:52:28.983819  782026 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:52:28.983832  782026 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:52:28.983889  782026 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:52:28.983970  782026 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:52:28.984063  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:52:28.992522  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:52:29.014583  782026 start.go:296] duration metric: took 156.016922ms for postStartSetup
	I1206 09:52:29.014683  782026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:52:29.014736  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.034344  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.129648  782026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:52:29.135314  782026 fix.go:56] duration metric: took 4.677087094s for fixHost
	I1206 09:52:29.135342  782026 start.go:83] releasing machines lock for "no-preload-521770", held for 4.677136228s
	I1206 09:52:29.135410  782026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-521770
	I1206 09:52:29.162339  782026 ssh_runner.go:195] Run: cat /version.json
	I1206 09:52:29.162396  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.162642  782026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:52:29.162728  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:29.185520  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.186727  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:29.349551  782026 ssh_runner.go:195] Run: systemctl --version
	I1206 09:52:29.358158  782026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:52:29.394015  782026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:52:29.398851  782026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:52:29.398921  782026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:52:29.407814  782026 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:52:29.407837  782026 start.go:496] detecting cgroup driver to use...
	I1206 09:52:29.407872  782026 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:52:29.407930  782026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:52:29.423937  782026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:52:29.438135  782026 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:52:29.438206  782026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:52:29.455656  782026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:52:29.469279  782026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:52:29.552150  782026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:52:29.653579  782026 docker.go:234] disabling docker service ...
	I1206 09:52:29.653654  782026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:52:29.673000  782026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:52:29.690524  782026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:52:29.786324  782026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:52:29.872262  782026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:52:29.885199  782026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:52:29.900924  782026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:52:29.900982  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.910821  782026 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:52:29.910889  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.919823  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.929149  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.938262  782026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:52:29.946657  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.955647  782026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.964498  782026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:52:29.973324  782026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:52:29.980560  782026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:52:29.987564  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:30.070090  782026 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:52:30.217146  782026 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:52:30.217230  782026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:52:30.222028  782026 start.go:564] Will wait 60s for crictl version
	I1206 09:52:30.222111  782026 ssh_runner.go:195] Run: which crictl
	I1206 09:52:30.226345  782026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:52:30.254418  782026 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:52:30.254555  782026 ssh_runner.go:195] Run: crio --version
	I1206 09:52:30.293642  782026 ssh_runner.go:195] Run: crio --version
	I1206 09:52:30.325962  782026 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
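Taken together, the sed edits above pin the CRI-O pause image and align its cgroup handling with the systemd driver detected on the host. A quick check of the resulting drop-in (expected values taken from the commands in the log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"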
	W1206 09:52:27.407536  771042 node_ready.go:57] node "embed-certs-997968" has "Ready":"False" status (will retry)
	I1206 09:52:29.407975  771042 node_ready.go:49] node "embed-certs-997968" is "Ready"
	I1206 09:52:29.408008  771042 node_ready.go:38] duration metric: took 11.003669422s for node "embed-certs-997968" to be "Ready" ...
	I1206 09:52:29.408027  771042 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:29.408075  771042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:29.420440  771042 api_server.go:72] duration metric: took 11.320537531s to wait for apiserver process to appear ...
	I1206 09:52:29.420477  771042 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:29.420500  771042 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:52:29.425249  771042 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 09:52:29.426270  771042 api_server.go:141] control plane version: v1.34.2
	I1206 09:52:29.426306  771042 api_server.go:131] duration metric: took 5.819336ms to wait for apiserver health ...
	I1206 09:52:29.426317  771042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:29.429954  771042 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:29.429999  771042 system_pods.go:61] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.430021  771042 system_pods.go:61] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.430033  771042 system_pods.go:61] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.430039  771042 system_pods.go:61] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.430044  771042 system_pods.go:61] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.430050  771042 system_pods.go:61] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.430054  771042 system_pods.go:61] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.430065  771042 system_pods.go:61] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.430084  771042 system_pods.go:74] duration metric: took 3.759477ms to wait for pod list to return data ...
	I1206 09:52:29.430098  771042 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:29.432724  771042 default_sa.go:45] found service account: "default"
	I1206 09:52:29.432745  771042 default_sa.go:55] duration metric: took 2.638778ms for default service account to be created ...
	I1206 09:52:29.432752  771042 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:52:29.435843  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:29.435879  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.435898  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.435906  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.435918  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.435928  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.435934  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.435940  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.435950  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.435977  771042 retry.go:31] will retry after 248.263392ms: missing components: kube-dns
	I1206 09:52:29.688876  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:29.688919  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:29.688928  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:29.688936  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:29.688941  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:29.688948  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:29.688953  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:29.688958  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:29.688965  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:29.689163  771042 retry.go:31] will retry after 320.128103ms: missing components: kube-dns
	I1206 09:52:30.018114  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:30.018163  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:30.018172  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:30.018184  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:30.018189  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:30.018203  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:30.018209  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:30.018214  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:30.018220  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:30.018241  771042 retry.go:31] will retry after 435.909841ms: missing components: kube-dns
	I1206 09:52:30.459320  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:30.459353  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:30.459361  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:30.459367  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:30.459371  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:30.459375  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:30.459378  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:30.459382  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:30.459390  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:30.459410  771042 retry.go:31] will retry after 560.042985ms: missing components: kube-dns
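	[editor's note] The embed-certs run above is a readiness poll: minikube lists the kube-system pods, finds coredns still Pending, and retries with a growing randomized delay (248ms, 320ms, 435ms, 560ms). A minimal Go sketch of that wait pattern, with a stubbed podsRunning closure standing in for minikube's real API query (the stub and all numbers are illustrative):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func main() {
    	attempt := 0
    	// podsRunning stands in for the real kube-system pod query; here it
    	// succeeds on the fourth poll, like the sequence in the log above.
    	podsRunning := func() bool {
    		attempt++
    		return attempt >= 4
    	}

    	deadline := time.Now().Add(10 * time.Second)
    	delay := 200 * time.Millisecond
    	for !podsRunning() {
    		if time.Now().After(deadline) {
    			fmt.Println("timed out waiting for kube-dns")
    			return
    		}
    		// randomized, growing backoff, mirroring the retry.go lines
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: missing components: kube-dns\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	fmt.Println("all required components are running")
    }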
	I1206 09:52:30.327261  782026 cli_runner.go:164] Run: docker network inspect no-preload-521770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:52:30.345176  782026 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:52:30.349559  782026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
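	[editor's note] Both grep/cp one-liners in this report (host.minikube.internal here, control-plane.minikube.internal a few lines below) perform the same idempotent /etc/hosts rewrite: strip any stale entry for the host, then append a fresh IP<TAB>host line. The same idea sketched in Go against a local file, with an illustrative hosts.sample path instead of the SSH round trip:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any existing line for the given host and appends a
    // fresh "IP<TAB>host" entry, matching the bash one-liner's behavior.
    func upsertHost(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // remove the stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHost("hosts.sample", "192.168.94.1", "host.minikube.internal"); err != nil {
    		fmt.Println("update failed:", err)
    	}
    }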
	I1206 09:52:30.361149  782026 kubeadm.go:884] updating cluster {Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:52:30.361304  782026 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:52:30.361352  782026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:52:30.394137  782026 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:52:30.394157  782026 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:52:30.394165  782026 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:52:30.394264  782026 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-521770 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:52:30.394337  782026 ssh_runner.go:195] Run: crio config
	I1206 09:52:30.445676  782026 cni.go:84] Creating CNI manager for ""
	I1206 09:52:30.445701  782026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:52:30.445721  782026 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:52:30.445751  782026 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521770 NodeName:no-preload-521770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:52:30.445918  782026 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521770"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:52:30.446005  782026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:52:30.455026  782026 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:52:30.455103  782026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:52:30.465415  782026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 09:52:30.479189  782026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:52:30.492386  782026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
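	[editor's note] The 2220-byte kubeadm.yaml.new written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---" markers. A minimal sketch, using only the standard library, of splitting such a stream and listing each document's kind; a real consumer would use a proper YAML parser:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kinds splits a multi-document YAML stream on "---" separators and
    // returns the first kind: value found in each document.
    func kinds(stream string) []string {
    	var out []string
    	for _, doc := range strings.Split(stream, "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			trimmed := strings.TrimSpace(line)
    			if strings.HasPrefix(trimmed, "kind:") {
    				out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
    				break
    			}
    		}
    	}
    	return out
    }

    func main() {
    	sample := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
    	fmt.Println(kinds(sample)) // [InitConfiguration ClusterConfiguration]
    }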
	I1206 09:52:30.505928  782026 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:52:30.510294  782026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:52:30.520891  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:30.616935  782026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:30.643233  782026 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770 for IP: 192.168.94.2
	I1206 09:52:30.643253  782026 certs.go:195] generating shared ca certs ...
	I1206 09:52:30.643270  782026 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:30.643417  782026 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:52:30.643475  782026 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:52:30.643487  782026 certs.go:257] generating profile certs ...
	I1206 09:52:30.643572  782026 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/client.key
	I1206 09:52:30.643626  782026 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.key.1f412e4b
	I1206 09:52:30.643661  782026 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.key
	I1206 09:52:30.643767  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:52:30.643797  782026 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:52:30.643807  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:52:30.643835  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:52:30.643858  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:52:30.643882  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:52:30.643923  782026 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:52:30.644530  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:52:30.663921  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:52:30.683993  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:52:30.703729  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:52:30.728228  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:52:30.750676  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:52:30.770880  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:52:30.791391  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/no-preload-521770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:52:30.810670  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:52:30.829739  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:52:30.848932  782026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:52:30.867931  782026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:52:30.880912  782026 ssh_runner.go:195] Run: openssl version
	I1206 09:52:30.887389  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.895646  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:52:30.903571  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.907605  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.907658  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:30.943625  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:52:30.951924  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.959877  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:52:30.967663  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.971326  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:52:30.971370  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:52:31.007142  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:52:31.015641  782026 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.024412  782026 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:52:31.032674  782026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.036926  782026 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.036985  782026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:52:31.082560  782026 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
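	[editor's note] The `openssl x509 -hash` / `ln -fs` / `test -L` triples above implement the standard OpenSSL trust-store layout: each CA becomes reachable under /etc/ssl/certs/<subject-hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), which is how OpenSSL-based clients locate it. A sketch that shells out to openssl for the hash and creates the link, assuming openssl on PATH and write access to the target directory:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCA computes the certificate's subject hash with openssl, then
    // exposes the PEM under destDir/<hash>.0, as the log does over SSH.
    func linkCA(pemPath, destDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := destDir + "/" + hash + ".0"
    	_ = os.Remove(link) // make the symlink creation idempotent
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println("link failed:", err)
    	}
    }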
	I1206 09:52:31.090907  782026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:52:31.095147  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:52:31.132949  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:52:31.180275  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:52:31.227799  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:52:31.286217  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:52:31.341886  782026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
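	[editor's note] Each `openssl x509 -checkend 86400` run above asks one question per control-plane cert: does it expire within the next 24 hours? The equivalent check in pure Go with crypto/x509 (the file path is illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the PEM certificate at path expires within
    // the next d, matching `openssl x509 -checkend` semantics.
    func checkend(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := checkend("apiserver.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", expiring)
    }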
	I1206 09:52:31.383881  782026 kubeadm.go:401] StartCluster: {Name:no-preload-521770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-521770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:31.383997  782026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:52:31.384066  782026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:52:31.416718  782026 cri.go:89] found id: "9dc873b13be2daef40a2751e9c41eeada071f9d2a36935447fdcf8f69e38bcb0"
	I1206 09:52:31.416742  782026 cri.go:89] found id: "4740c81bbda6eb396add856fa79e529e77045345b6b8aafa409f0c035427e3e5"
	I1206 09:52:31.416748  782026 cri.go:89] found id: "1180b54a98400f332dbb4dda677c01fc02e3c44f901938b0567810c83d6df692"
	I1206 09:52:31.416753  782026 cri.go:89] found id: "585f10915444acd7acfdddbe9415b18fc4bb7c9d1e5009ad15a8bf10a9129068"
	I1206 09:52:31.416758  782026 cri.go:89] found id: ""
	I1206 09:52:31.416811  782026 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:52:31.431324  782026 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:52:31Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:52:31.431427  782026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:52:31.441532  782026 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:52:31.441554  782026 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:52:31.441600  782026 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:52:31.451039  782026 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:52:31.451930  782026 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-521770" does not appear in /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:31.452547  782026 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-499330/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-521770" cluster setting kubeconfig missing "no-preload-521770" context setting]
	I1206 09:52:31.453822  782026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:31.456004  782026 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:52:31.465352  782026 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1206 09:52:31.465394  782026 kubeadm.go:602] duration metric: took 23.833546ms to restartPrimaryControlPlane
	I1206 09:52:31.465406  782026 kubeadm.go:403] duration metric: took 81.54039ms to StartCluster
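	[editor's note] Restart detection hinges on the `diff -u kubeadm.yaml kubeadm.yaml.new` probe above: when the freshly rendered config matches what is already on the node, minikube skips reconfiguration, which is why restartPrimaryControlPlane finishes in 23ms here. That decision sketched as a plain byte comparison (file names illustrative, standing in for the diff over SSH):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsReconfig reports whether the freshly rendered kubeadm config
    // differs from the one already on disk.
    func needsReconfig(current, next string) (bool, error) {
    	a, err := os.ReadFile(current)
    	if err != nil {
    		return true, nil // no existing config: always reconfigure
    	}
    	b, err := os.ReadFile(next)
    	if err != nil {
    		return false, err
    	}
    	return !bytes.Equal(a, b), nil
    }

    func main() {
    	changed, err := needsReconfig("kubeadm.yaml", "kubeadm.yaml.new")
    	fmt.Println("reconfigure:", changed, err)
    }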
	I1206 09:52:31.465427  782026 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:31.465520  782026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:31.468310  782026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:31.468628  782026 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:52:31.468678  782026 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:52:31.468786  782026 addons.go:70] Setting storage-provisioner=true in profile "no-preload-521770"
	I1206 09:52:31.468806  782026 addons.go:239] Setting addon storage-provisioner=true in "no-preload-521770"
	W1206 09:52:31.468818  782026 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:52:31.468847  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.468862  782026 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:31.468867  782026 addons.go:70] Setting dashboard=true in profile "no-preload-521770"
	I1206 09:52:31.468883  782026 addons.go:70] Setting default-storageclass=true in profile "no-preload-521770"
	I1206 09:52:31.468914  782026 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521770"
	I1206 09:52:31.468892  782026 addons.go:239] Setting addon dashboard=true in "no-preload-521770"
	W1206 09:52:31.469002  782026 addons.go:248] addon dashboard should already be in state true
	I1206 09:52:31.469030  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.469241  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.469323  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.469518  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.471225  782026 out.go:179] * Verifying Kubernetes components...
	I1206 09:52:31.474638  782026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:31.493357  782026 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 09:52:31.493374  782026 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:31.494645  782026 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:31.494664  782026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:52:31.494695  782026 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:52:28.896370  778743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:52:28.900738  778743 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1206 09:52:28.900757  778743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:52:28.915819  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:52:29.131536  778743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:52:29.131597  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:29.131646  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-641599 minikube.k8s.io/updated_at=2025_12_06T09_52_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=newest-cni-641599 minikube.k8s.io/primary=true
	I1206 09:52:29.144444  778743 ops.go:34] apiserver oom_adj: -16
	I1206 09:52:29.227957  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:29.728684  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:30.228791  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:30.728381  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:31.228935  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:31.728685  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:31.024277  771042 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:31.024322  771042 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Running
	I1206 09:52:31.024332  771042 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running
	I1206 09:52:31.024337  771042 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:52:31.024343  771042 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running
	I1206 09:52:31.024349  771042 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running
	I1206 09:52:31.024355  771042 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:52:31.024360  771042 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running
	I1206 09:52:31.024370  771042 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Running
	I1206 09:52:31.024380  771042 system_pods.go:126] duration metric: took 1.591621605s to wait for k8s-apps to be running ...
	I1206 09:52:31.024393  771042 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:52:31.024440  771042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:52:31.038119  771042 system_svc.go:56] duration metric: took 13.715424ms WaitForService to wait for kubelet
	I1206 09:52:31.038150  771042 kubeadm.go:587] duration metric: took 12.938253131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:31.038185  771042 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:31.041178  771042 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:31.041209  771042 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:31.041227  771042 node_conditions.go:105] duration metric: took 3.034732ms to run NodePressure ...
	I1206 09:52:31.041253  771042 start.go:242] waiting for startup goroutines ...
	I1206 09:52:31.041267  771042 start.go:247] waiting for cluster config update ...
	I1206 09:52:31.041282  771042 start.go:256] writing updated cluster config ...
	I1206 09:52:31.041621  771042 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:31.045304  771042 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:31.049252  771042 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.053658  771042 pod_ready.go:94] pod "coredns-66bc5c9577-kw8nl" is "Ready"
	I1206 09:52:31.053679  771042 pod_ready.go:86] duration metric: took 4.401998ms for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.055651  771042 pod_ready.go:83] waiting for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.059893  771042 pod_ready.go:94] pod "etcd-embed-certs-997968" is "Ready"
	I1206 09:52:31.059916  771042 pod_ready.go:86] duration metric: took 4.242092ms for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.061937  771042 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.065821  771042 pod_ready.go:94] pod "kube-apiserver-embed-certs-997968" is "Ready"
	I1206 09:52:31.065838  771042 pod_ready.go:86] duration metric: took 3.881454ms for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.067804  771042 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.450772  771042 pod_ready.go:94] pod "kube-controller-manager-embed-certs-997968" is "Ready"
	I1206 09:52:31.450805  771042 pod_ready.go:86] duration metric: took 382.979811ms for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:31.650858  771042 pod_ready.go:83] waiting for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.050643  771042 pod_ready.go:94] pod "kube-proxy-m2zpr" is "Ready"
	I1206 09:52:32.050679  771042 pod_ready.go:86] duration metric: took 399.791241ms for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.251448  771042 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.651295  771042 pod_ready.go:94] pod "kube-scheduler-embed-certs-997968" is "Ready"
	I1206 09:52:32.651333  771042 pod_ready.go:86] duration metric: took 399.807696ms for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:52:32.651350  771042 pod_ready.go:40] duration metric: took 1.606005846s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:32.715347  771042 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:52:32.717209  771042 out.go:179] * Done! kubectl is now configured to use "embed-certs-997968" cluster and "default" namespace by default
	I1206 09:52:31.494726  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.496408  782026 addons.go:239] Setting addon default-storageclass=true in "no-preload-521770"
	W1206 09:52:31.496441  782026 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:52:31.496486  782026 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:52:31.496950  782026 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:52:31.497158  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:52:31.497175  782026 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:52:31.497220  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.525895  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.525995  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.531373  782026 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:31.531497  782026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:52:31.531646  782026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:52:31.558891  782026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:52:31.617244  782026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:31.631670  782026 node_ready.go:35] waiting up to 6m0s for node "no-preload-521770" to be "Ready" ...
	I1206 09:52:31.637913  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:31.640598  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:52:31.640621  782026 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:52:31.655712  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:52:31.655739  782026 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:52:31.665130  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:31.670828  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:52:31.670852  782026 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:52:31.684031  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:52:31.684058  782026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:52:31.699299  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:52:31.699328  782026 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:52:31.715252  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:52:31.715293  782026 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:52:31.731123  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:52:31.731152  782026 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:52:31.746870  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:52:31.746901  782026 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:52:31.764204  782026 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:52:31.764245  782026 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:52:31.778466  782026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:52:32.765763  782026 node_ready.go:49] node "no-preload-521770" is "Ready"
	I1206 09:52:32.765807  782026 node_ready.go:38] duration metric: took 1.13410262s for node "no-preload-521770" to be "Ready" ...
	I1206 09:52:32.765825  782026 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:32.765878  782026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:33.377502  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.739521443s)
	I1206 09:52:33.377555  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.712396791s)
	I1206 09:52:33.377698  782026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.599185729s)
	I1206 09:52:33.377732  782026 api_server.go:72] duration metric: took 1.909066219s to wait for apiserver process to appear ...
	I1206 09:52:33.377745  782026 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:33.377766  782026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:52:33.379186  782026 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-521770 addons enable metrics-server
	
	I1206 09:52:33.382540  782026 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:52:33.382566  782026 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
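	[editor's note] The two 500 responses above are expected during startup: /healthz aggregates the apiserver's post-start hooks, and rbac/bootstrap-roles has not finished yet, so minikube keeps polling until the endpoint returns 200. A polling sketch (the URL is copied from the log; TLS verification is skipped only because this sketch wires in no CA bundle, whereas minikube's real check authenticates properly):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns
    // 200 or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			// a 500 listing "[-]poststarthook/... failed" means a
    			// post-start hook is still settling; keep polling
    			fmt.Printf("healthz %d:\n%s", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy")
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", 30*time.Second))
    }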
	I1206 09:52:33.384754  782026 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1206 09:52:32.228887  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:32.728657  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.228203  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.728879  778743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:52:33.831071  778743 kubeadm.go:1114] duration metric: took 4.699527608s to wait for elevateKubeSystemPrivileges
	I1206 09:52:33.831115  778743 kubeadm.go:403] duration metric: took 12.354833253s to StartCluster
	I1206 09:52:33.831139  778743 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:33.831222  778743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:52:33.834301  778743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:33.835103  778743 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:52:33.835251  778743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:52:33.835593  778743 config.go:182] Loaded profile config "newest-cni-641599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:33.836022  778743 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:52:33.836151  778743 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-641599"
	I1206 09:52:33.836171  778743 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-641599"
	I1206 09:52:33.836215  778743 host.go:66] Checking if "newest-cni-641599" exists ...
	I1206 09:52:33.836254  778743 addons.go:70] Setting default-storageclass=true in profile "newest-cni-641599"
	I1206 09:52:33.836283  778743 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-641599"
	I1206 09:52:33.836675  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.836819  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.836996  778743 out.go:179] * Verifying Kubernetes components...
	I1206 09:52:33.838578  778743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:33.865836  778743 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:33.385762  782026 addons.go:530] duration metric: took 1.917092445s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1206 09:52:33.877868  782026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:52:33.892187  782026 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:52:33.892234  782026 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500 (healthz body identical to the response above)
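The readiness gate being exercised here is a plain HTTPS GET against the apiserver's /healthz endpoint, retried until it returns 200; a 500 response enumerates each post-start hook, as shown above. A minimal sketch of that style of probe in Go (endpoint taken from the log; certificate verification is skipped only to keep the sketch self-contained, whereas minikube trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Insecure TLS keeps the sketch self-contained; a real probe
			// would verify against the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.94.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz: ok")
					return
				}
				// A 500 here lists each failing post-start hook.
				fmt.Printf("healthz: %d\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}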
	I1206 09:52:33.866996  778743 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:33.867020  778743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:52:33.867084  778743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-641599
	I1206 09:52:33.867713  778743 addons.go:239] Setting addon default-storageclass=true in "newest-cni-641599"
	I1206 09:52:33.867760  778743 host.go:66] Checking if "newest-cni-641599" exists ...
	I1206 09:52:33.868266  778743 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:52:33.901090  778743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/newest-cni-641599/id_rsa Username:docker}
	I1206 09:52:33.902303  778743 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:33.902332  778743 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:52:33.902406  778743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-641599
	I1206 09:52:33.931188  778743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/newest-cni-641599/id_rsa Username:docker}
	I1206 09:52:33.956142  778743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
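The bash pipeline above patches the coredns ConfigMap in flight: one sed expression inserts a hosts block immediately before the forward directive so that host.minikube.internal resolves to the gateway IP, and a second inserts log before errors. Reconstructed from those expressions (other default Corefile directives elided, marked "..."), the patched stanza looks like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf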
	I1206 09:52:34.000905  778743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:34.019354  778743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:52:34.066170  778743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:52:34.207346  778743 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1206 09:52:34.209353  778743 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:52:34.209427  778743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:52:34.384953  778743 api_server.go:72] duration metric: took 549.802737ms to wait for apiserver process to appear ...
	I1206 09:52:34.384982  778743 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:52:34.385002  778743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:52:34.390257  778743 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 09:52:34.391261  778743 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:52:34.391294  778743 api_server.go:131] duration metric: took 6.303663ms to wait for apiserver health ...
	I1206 09:52:34.391304  778743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:34.393369  778743 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:52:34.394798  778743 addons.go:530] duration metric: took 559.141528ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:52:34.395190  778743 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:34.395225  778743 system_pods.go:61] "coredns-7d764666f9-8njm9" [97429e74-14c2-47b6-aecd-8b863a997474] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:52:34.395242  778743 system_pods.go:61] "etcd-newest-cni-641599" [ca0d2519-e026-4dee-a3fb-ce7df13ee8fc] Running
	I1206 09:52:34.395255  778743 system_pods.go:61] "kindnet-kv2gc" [0f27b79f-29eb-4e3e-9a65-fbc2529e4f09] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:52:34.395262  778743 system_pods.go:61] "kube-apiserver-newest-cni-641599" [40559cd7-889e-49dd-9f65-0b5e9a543dc2] Running
	I1206 09:52:34.395272  778743 system_pods.go:61] "kube-controller-manager-newest-cni-641599" [a609bb41-7a15-4452-9140-a79c35a026c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:52:34.395294  778743 system_pods.go:61] "kube-proxy-fv54r" [b74c4162-c9cd-43a6-9a4a-2162b2899489] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:52:34.395304  778743 system_pods.go:61] "kube-scheduler-newest-cni-641599" [81daab83-11f5-44cb-982c-212001fe43a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:52:34.395309  778743 system_pods.go:61] "storage-provisioner" [4de61ac3-6403-4c30-9cea-246b6f8bc458] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:52:34.395319  778743 system_pods.go:74] duration metric: took 4.008824ms to wait for pod list to return data ...
	I1206 09:52:34.395326  778743 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:34.397781  778743 default_sa.go:45] found service account: "default"
	I1206 09:52:34.397804  778743 default_sa.go:55] duration metric: took 2.472195ms for default service account to be created ...
	I1206 09:52:34.397819  778743 kubeadm.go:587] duration metric: took 562.671247ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:52:34.397836  778743 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:34.442287  778743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:34.442320  778743 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:34.442334  778743 node_conditions.go:105] duration metric: took 44.493558ms to run NodePressure ...
	I1206 09:52:34.442347  778743 start.go:242] waiting for startup goroutines ...
	I1206 09:52:34.714855  778743 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-641599" context rescaled to 1 replicas
	I1206 09:52:34.714897  778743 start.go:247] waiting for cluster config update ...
	I1206 09:52:34.714911  778743 start.go:256] writing updated cluster config ...
	I1206 09:52:34.715173  778743 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:34.769780  778743 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:52:34.772739  778743 out.go:179] * Done! kubectl is now configured to use "newest-cni-641599" cluster and "default" namespace by default
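The closing line reports version skew between the kubectl client and the control plane; the reported "minor skew" is simple minor-version arithmetic. A rough illustration in Go (not minikube's actual code; version strings taken from the log):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorOf extracts the minor component from a version string such as
	// "1.34.2" or "1.35.0-beta.0". Input validation is omitted for brevity.
	func minorOf(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		kubectl, cluster := "1.34.2", "1.35.0-beta.0"
		skew := minorOf(cluster) - minorOf(kubectl)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	}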
	I1206 09:52:34.378648  782026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:52:34.384028  782026 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1206 09:52:34.385581  782026 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:52:34.385610  782026 api_server.go:131] duration metric: took 1.007857713s to wait for apiserver health ...
	I1206 09:52:34.385622  782026 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:52:34.390034  782026 system_pods.go:59] 8 kube-system pods found
	I1206 09:52:34.390087  782026 system_pods.go:61] "coredns-7d764666f9-mhwh5" [a8d7204c-9d11-4944-bc37-a5788a67aaab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:34.390100  782026 system_pods.go:61] "etcd-no-preload-521770" [70631f4e-162f-4705-8a60-a85268dc3dcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:52:34.390117  782026 system_pods.go:61] "kindnet-2w8b5" [6fd87fa0-c550-4070-86fc-32b4938f35da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:52:34.390136  782026 system_pods.go:61] "kube-apiserver-no-preload-521770" [161ebc70-6169-4cac-80c1-74ac9a873e0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:52:34.390152  782026 system_pods.go:61] "kube-controller-manager-no-preload-521770" [56d3cb3d-a16d-4403-a714-b61ef6ee324c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:52:34.390166  782026 system_pods.go:61] "kube-proxy-t7vrx" [e4a78bfd-8025-45f5-94fa-116ef311de94] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:52:34.390181  782026 system_pods.go:61] "kube-scheduler-no-preload-521770" [ab126d16-4ccb-4e6a-bedb-412cc082844f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:52:34.390188  782026 system_pods.go:61] "storage-provisioner" [6be872af-41f0-4aae-adf9-40313b511c3c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:34.390195  782026 system_pods.go:74] duration metric: took 4.566571ms to wait for pod list to return data ...
	I1206 09:52:34.390204  782026 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:52:34.393718  782026 default_sa.go:45] found service account: "default"
	I1206 09:52:34.393741  782026 default_sa.go:55] duration metric: took 3.531222ms for default service account to be created ...
	I1206 09:52:34.393750  782026 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:52:34.396513  782026 system_pods.go:86] 8 kube-system pods found
	I1206 09:52:34.396543  782026 system_pods.go:89] "coredns-7d764666f9-mhwh5" [a8d7204c-9d11-4944-bc37-a5788a67aaab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:52:34.396554  782026 system_pods.go:89] "etcd-no-preload-521770" [70631f4e-162f-4705-8a60-a85268dc3dcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:52:34.396570  782026 system_pods.go:89] "kindnet-2w8b5" [6fd87fa0-c550-4070-86fc-32b4938f35da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:52:34.396584  782026 system_pods.go:89] "kube-apiserver-no-preload-521770" [161ebc70-6169-4cac-80c1-74ac9a873e0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:52:34.396596  782026 system_pods.go:89] "kube-controller-manager-no-preload-521770" [56d3cb3d-a16d-4403-a714-b61ef6ee324c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:52:34.396609  782026 system_pods.go:89] "kube-proxy-t7vrx" [e4a78bfd-8025-45f5-94fa-116ef311de94] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:52:34.396617  782026 system_pods.go:89] "kube-scheduler-no-preload-521770" [ab126d16-4ccb-4e6a-bedb-412cc082844f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:52:34.396627  782026 system_pods.go:89] "storage-provisioner" [6be872af-41f0-4aae-adf9-40313b511c3c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:52:34.396646  782026 system_pods.go:126] duration metric: took 2.889598ms to wait for k8s-apps to be running ...
	I1206 09:52:34.396656  782026 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:52:34.396711  782026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:52:34.410212  782026 system_svc.go:56] duration metric: took 13.547556ms WaitForService to wait for kubelet
	I1206 09:52:34.410238  782026 kubeadm.go:587] duration metric: took 2.941573619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:34.410257  782026 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:52:34.413022  782026 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:52:34.413045  782026 node_conditions.go:123] node cpu capacity is 8
	I1206 09:52:34.413059  782026 node_conditions.go:105] duration metric: took 2.797665ms to run NodePressure ...
	I1206 09:52:34.413071  782026 start.go:242] waiting for startup goroutines ...
	I1206 09:52:34.413077  782026 start.go:247] waiting for cluster config update ...
	I1206 09:52:34.413093  782026 start.go:256] writing updated cluster config ...
	I1206 09:52:34.413304  782026 ssh_runner.go:195] Run: rm -f paused
	I1206 09:52:34.417145  782026 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:52:34.489347  782026 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mhwh5" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:52:36.494894  782026 pod_ready.go:104] pod "coredns-7d764666f9-mhwh5" is not "Ready", error: <nil>
	W1206 09:52:38.495796  782026 pod_ready.go:104] pod "coredns-7d764666f9-mhwh5" is not "Ready", error: <nil>
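The pod_ready loop above polls each listed pod until its PodReady condition turns True or the 4m0s budget runs out. A stripped-down version of that check using client-go (kubeconfig path and pod name taken from the log; the timeout and most error handling are trimmed, and this is a sketch rather than minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7d764666f9-mhwh5", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}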
	
	
	==> CRI-O <==
	Dec 06 09:52:29 embed-certs-997968 crio[775]: time="2025-12-06T09:52:29.669121041Z" level=info msg="Started container" PID=1872 containerID=e6ad5b4c3c44c81875f5f44f11fcab46c8226cd7a787fe87238011366f99f543 description=kube-system/coredns-66bc5c9577-kw8nl/coredns id=90d1ec96-f5b8-4207-85c8-2f30a943b06e name=/runtime.v1.RuntimeService/StartContainer sandboxID=b65758246a10a4da7eb914e8642d8c03ed2150d6fe9559c1eaa590c8e782a61d
	Dec 06 09:52:29 embed-certs-997968 crio[775]: time="2025-12-06T09:52:29.669891126Z" level=info msg="Started container" PID=1871 containerID=fc1bc580d4cc192b30ac7d7423f44d5b0a5d795132eaa127ca3b29ab82737691 description=kube-system/storage-provisioner/storage-provisioner id=388411d2-2c81-43ef-8ae9-dde550b9d375 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69b6fa6fda826b5beef35dfc5b4ef727a31a50135ed6dc9a825c1ad278c5875d
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.252841243Z" level=info msg="Running pod sandbox: default/busybox/POD" id=479c8257-d270-4489-bb26-2bffb6745b2d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.252941964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.259121712Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:45693955bcfb8d2947ddf77bccd8eca496ebb931890e9fc56ce3de103437701a UID:572be28a-1a60-48d5-95e5-a5355b5493ee NetNS:/var/run/netns/f7e4c362-3d60-4001-964c-bd6998a7f6b3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006807d8}] Aliases:map[]}"
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.259166472Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.272170191Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:45693955bcfb8d2947ddf77bccd8eca496ebb931890e9fc56ce3de103437701a UID:572be28a-1a60-48d5-95e5-a5355b5493ee NetNS:/var/run/netns/f7e4c362-3d60-4001-964c-bd6998a7f6b3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006807d8}] Aliases:map[]}"
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.272375935Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.273365672Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.274589459Z" level=info msg="Ran pod sandbox 45693955bcfb8d2947ddf77bccd8eca496ebb931890e9fc56ce3de103437701a with infra container: default/busybox/POD" id=479c8257-d270-4489-bb26-2bffb6745b2d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.276205265Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d5f33570-4d2d-49a7-b7ce-6eddc58b61e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.276368976Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d5f33570-4d2d-49a7-b7ce-6eddc58b61e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.276420672Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d5f33570-4d2d-49a7-b7ce-6eddc58b61e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.277261584Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a576020b-e238-48dc-84a3-60452e05df09 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:52:33 embed-certs-997968 crio[775]: time="2025-12-06T09:52:33.279221439Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.430273764Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a576020b-e238-48dc-84a3-60452e05df09 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.431206878Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9a1bc0f0-f21c-442f-92cc-b97d08c8e6d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.435578571Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6835c003-75a7-41d0-8f00-f014552f3aad name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.439447477Z" level=info msg="Creating container: default/busybox/busybox" id=6199c048-3bf4-4739-af02-599290dd6d71 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.439627495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.444828973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.445333898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.472424819Z" level=info msg="Created container c2bbffcaeee3764a9918f82d8274aea06cae089009e1d9ce5e9fa3fdbaa2b090: default/busybox/busybox" id=6199c048-3bf4-4739-af02-599290dd6d71 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.473157977Z" level=info msg="Starting container: c2bbffcaeee3764a9918f82d8274aea06cae089009e1d9ce5e9fa3fdbaa2b090" id=8ece2417-db05-4322-99f2-da96d46fa8f4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:35 embed-certs-997968 crio[775]: time="2025-12-06T09:52:35.475099897Z" level=info msg="Started container" PID=1946 containerID=c2bbffcaeee3764a9918f82d8274aea06cae089009e1d9ce5e9fa3fdbaa2b090 description=default/busybox/busybox id=8ece2417-db05-4322-99f2-da96d46fa8f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45693955bcfb8d2947ddf77bccd8eca496ebb931890e9fc56ce3de103437701a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	c2bbffcaeee37       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   45693955bcfb8       busybox                                      default
	e6ad5b4c3c44c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   b65758246a10a       coredns-66bc5c9577-kw8nl                     kube-system
	fc1bc580d4cc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   69b6fa6fda826       storage-provisioner                          kube-system
	18e288c99d3ec       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      25 seconds ago      Running             kube-proxy                0                   10aa98d8836cf       kube-proxy-m2zpr                             kube-system
	9f21a5d9b460a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   c62da0642626f       kindnet-f84xr                                kube-system
	35b8bb6690ab1       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      36 seconds ago      Running             kube-controller-manager   0                   b825bcb53106a       kube-controller-manager-embed-certs-997968   kube-system
	e6b1306389b67       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      36 seconds ago      Running             kube-apiserver            0                   168960f88f625       kube-apiserver-embed-certs-997968            kube-system
	b6882c438d81e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      36 seconds ago      Running             kube-scheduler            0                   24c5e034ed38f       kube-scheduler-embed-certs-997968            kube-system
	f74cf0f046be3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      36 seconds ago      Running             etcd                      0                   e0cff0e859786       etcd-embed-certs-997968                      kube-system
	
	
	==> coredns [e6ad5b4c3c44c81875f5f44f11fcab46c8226cd7a787fe87238011366f99f543] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53407 - 8529 "HINFO IN 6935086369698799071.8230583171828144865. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.040040416s
	
	
	==> describe nodes <==
	Name:               embed-certs-997968
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-997968
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=embed-certs-997968
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-997968
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:52:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:52:43 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:52:43 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:52:43 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:52:43 +0000   Sat, 06 Dec 2025 09:52:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-997968
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                39095a07-7a66-4c4f-9c45-34915880419b
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-kw8nl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-997968                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-f84xr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-997968             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-997968    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-m2zpr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-997968             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
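As a cross-check, these totals follow directly from the pod table above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 8 CPUs (8000m) is about 10.6%, which kubectl rounds down to 10%. Memory requests are 70Mi + 100Mi + 50Mi = 220Mi; the only limits set are kindnet's 100m CPU and 50Mi memory plus coredns's 170Mi memory limit, giving the 100m / 220Mi limit totals shown.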
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node embed-certs-997968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node embed-certs-997968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node embed-certs-997968 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-997968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-997968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-997968 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node embed-certs-997968 event: Registered Node embed-certs-997968 in Controller
	  Normal  NodeReady                14s                kubelet          Node embed-certs-997968 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [f74cf0f046be3557df53a986ba22100093f0d3cee10380baa041a63e5e0f0d8b] <==
	{"level":"warn","ts":"2025-12-06T09:52:09.044078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.052738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.062588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.070199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.082586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.087891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.096368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.107207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.117837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.126235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.146303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.154138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.163483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.177624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.187514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.198379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:09.273701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51466","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:52:15.934426Z","caller":"traceutil/trace.go:172","msg":"trace[727227260] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"108.090046ms","start":"2025-12-06T09:52:15.826314Z","end":"2025-12-06T09:52:15.934404Z","steps":["trace[727227260] 'process raft request'  (duration: 107.93184ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:52:16.290523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.947903ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597539356295958 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" value_size:129 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:52:16.290635Z","caller":"traceutil/trace.go:172","msg":"trace[1896387410] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"164.372392ms","start":"2025-12-06T09:52:16.126249Z","end":"2025-12-06T09:52:16.290621Z","steps":["trace[1896387410] 'process raft request'  (duration: 36.896564ms)","trace[1896387410] 'compare'  (duration: 126.836584ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:52:16.699397Z","caller":"traceutil/trace.go:172","msg":"trace[1514276099] linearizableReadLoop","detail":"{readStateIndex:343; appliedIndex:343; }","duration":"103.032242ms","start":"2025-12-06T09:52:16.596341Z","end":"2025-12-06T09:52:16.699373Z","steps":["trace[1514276099] 'read index received'  (duration: 103.021944ms)","trace[1514276099] 'applied index is now lower than readState.Index'  (duration: 8.914µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:52:16.699554Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.19587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:52:16.699586Z","caller":"traceutil/trace.go:172","msg":"trace[1345666363] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:331; }","duration":"103.249258ms","start":"2025-12-06T09:52:16.596330Z","end":"2025-12-06T09:52:16.699579Z","steps":["trace[1345666363] 'agreement among raft nodes before linearized reading'  (duration: 103.153627ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:52:16.699676Z","caller":"traceutil/trace.go:172","msg":"trace[1364454709] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"123.702683ms","start":"2025-12-06T09:52:16.575957Z","end":"2025-12-06T09:52:16.699660Z","steps":["trace[1364454709] 'process raft request'  (duration: 123.468202ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:52:16.925452Z","caller":"traceutil/trace.go:172","msg":"trace[1750719346] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"139.796062ms","start":"2025-12-06T09:52:16.785638Z","end":"2025-12-06T09:52:16.925434Z","steps":["trace[1750719346] 'process raft request'  (duration: 139.574409ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:52:43 up  2:35,  0 user,  load average: 5.09, 3.12, 3.30
	Linux embed-certs-997968 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f21a5d9b460a489c3536ca3273ba94027cf8660bca645f618bfcf8f9485fedf] <==
	I1206 09:52:18.542425       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:52:18.542778       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:52:18.542916       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:52:18.542936       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:52:18.542967       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:52:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:52:18.745246       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:52:18.745277       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:52:18.745287       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:52:18.746031       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:52:19.145378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:52:19.145407       1 metrics.go:72] Registering metrics
	I1206 09:52:19.145494       1 controller.go:711] "Syncing nftables rules"
	I1206 09:52:28.746532       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:52:28.746575       1 main.go:301] handling current node
	I1206 09:52:38.747546       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:52:38.747594       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6b1306389b67455253c5d7b4e88a9c0a3dc28cacc1e0574b54c41a0d3b9d082] <==
	I1206 09:52:09.825032       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:52:09.825040       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:52:09.825046       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:52:09.839074       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:09.845268       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:09.853355       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:52:10.013724       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:52:10.718676       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:52:10.722623       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:52:10.722639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:52:11.224846       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:52:11.263398       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:52:11.326232       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:52:11.334691       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1206 09:52:11.336845       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:52:11.344058       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:52:11.729815       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:52:12.520969       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:52:12.534741       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:52:12.544771       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:52:17.433253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:52:17.684924       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:17.690534       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:17.830429       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1206 09:52:42.045244       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:51524: use of closed network connection
	
	
	==> kube-controller-manager [35b8bb6690ab106bf7efd6a364723c702f7c2ce3955724d7d0cdb9f8115cdbb4] <==
	I1206 09:52:16.780206       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:52:16.781926       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:52:16.782037       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 09:52:16.782092       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 09:52:16.782127       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 09:52:16.782140       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 09:52:16.782147       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 09:52:16.782276       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:52:16.783590       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:52:16.783724       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:52:16.792987       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:52:16.794178       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:52:16.794215       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:52:16.795385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:52:16.795446       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:52:16.801684       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:52:16.804938       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:52:16.808271       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:52:16.808477       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:52:16.828098       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:52:16.828113       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:52:16.828119       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:52:16.830360       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:52:16.927323       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-997968" podCIDRs=["10.244.0.0/24"]
	I1206 09:52:31.744500       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [18e288c99d3ece786f69c7c13cdce6392bd2beb95279f901048c59983f15bb6a] <==
	I1206 09:52:18.355769       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:52:18.434887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:52:18.535909       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:52:18.535979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 09:52:18.536107       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:52:18.556382       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:52:18.556481       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:52:18.562448       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:52:18.562812       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:52:18.562843       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:52:18.564705       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:52:18.564918       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:52:18.564959       1 config.go:200] "Starting service config controller"
	I1206 09:52:18.566046       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:52:18.565021       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:52:18.566101       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:52:18.566220       1 config.go:309] "Starting node config controller"
	I1206 09:52:18.566243       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:52:18.566249       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:52:18.665450       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:52:18.666657       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:52:18.666697       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b6882c438d81e7c071617c9ca22a1d3fd0d68ef08e0774de2e639ff39d6148ab] <==
	E1206 09:52:09.786090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:52:09.786189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:52:09.786252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:52:09.786286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:52:09.786282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:52:09.786334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:52:09.786609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:52:09.786620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:52:09.785834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:52:09.786929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:52:09.786988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:52:10.604724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:52:10.628919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:52:10.717394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:52:10.726607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:52:10.830660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:52:10.851967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:52:10.868583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:52:10.906318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:52:10.926631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:52:10.958911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:52:10.963252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:52:10.989819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:52:11.065150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1206 09:52:13.780299       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:52:13 embed-certs-997968 kubelet[1341]: E1206 09:52:13.501539    1341 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-997968\" already exists" pod="kube-system/kube-apiserver-embed-certs-997968"
	Dec 06 09:52:13 embed-certs-997968 kubelet[1341]: I1206 09:52:13.523305    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-997968" podStartSLOduration=1.5232819659999999 podStartE2EDuration="1.523281966s" podCreationTimestamp="2025-12-06 09:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:13.513771407 +0000 UTC m=+1.166834295" watchObservedRunningTime="2025-12-06 09:52:13.523281966 +0000 UTC m=+1.176344854"
	Dec 06 09:52:13 embed-certs-997968 kubelet[1341]: I1206 09:52:13.532984    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-997968" podStartSLOduration=1.532961611 podStartE2EDuration="1.532961611s" podCreationTimestamp="2025-12-06 09:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:13.523473611 +0000 UTC m=+1.176536498" watchObservedRunningTime="2025-12-06 09:52:13.532961611 +0000 UTC m=+1.186024500"
	Dec 06 09:52:13 embed-certs-997968 kubelet[1341]: I1206 09:52:13.533179    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-997968" podStartSLOduration=2.533165218 podStartE2EDuration="2.533165218s" podCreationTimestamp="2025-12-06 09:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:13.532963069 +0000 UTC m=+1.186025958" watchObservedRunningTime="2025-12-06 09:52:13.533165218 +0000 UTC m=+1.186228121"
	Dec 06 09:52:13 embed-certs-997968 kubelet[1341]: I1206 09:52:13.552143    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-997968" podStartSLOduration=1.552120159 podStartE2EDuration="1.552120159s" podCreationTimestamp="2025-12-06 09:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:13.542632349 +0000 UTC m=+1.195695237" watchObservedRunningTime="2025-12-06 09:52:13.552120159 +0000 UTC m=+1.205183050"
	Dec 06 09:52:16 embed-certs-997968 kubelet[1341]: I1206 09:52:16.990898    1341 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:52:16 embed-certs-997968 kubelet[1341]: I1206 09:52:16.991774    1341 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:52:17 embed-certs-997968 kubelet[1341]: I1206 09:52:17.870982    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/323e6efb-c1dc-4444-a267-62cbeea83a87-cni-cfg\") pod \"kindnet-f84xr\" (UID: \"323e6efb-c1dc-4444-a267-62cbeea83a87\") " pod="kube-system/kindnet-f84xr"
	Dec 06 09:52:17 embed-certs-997968 kubelet[1341]: I1206 09:52:17.871040    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjgv2\" (UniqueName: \"kubernetes.io/projected/69d79892-828c-4f7a-b513-947e20961afe-kube-api-access-kjgv2\") pod \"kube-proxy-m2zpr\" (UID: \"69d79892-828c-4f7a-b513-947e20961afe\") " pod="kube-system/kube-proxy-m2zpr"
	Dec 06 09:52:17 embed-certs-997968 kubelet[1341]: I1206 09:52:17.871066    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/323e6efb-c1dc-4444-a267-62cbeea83a87-xtables-lock\") pod \"kindnet-f84xr\" (UID: \"323e6efb-c1dc-4444-a267-62cbeea83a87\") " pod="kube-system/kindnet-f84xr"
	Dec 06 09:52:17 embed-certs-997968 kubelet[1341]: I1206 09:52:17.871095    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/323e6efb-c1dc-4444-a267-62cbeea83a87-lib-modules\") pod \"kindnet-f84xr\" (UID: \"323e6efb-c1dc-4444-a267-62cbeea83a87\") " pod="kube-system/kindnet-f84xr"
	Dec 06 09:52:17 embed-certs-997968 kubelet[1341]: I1206 09:52:17.871119    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/69d79892-828c-4f7a-b513-947e20961afe-kube-proxy\") pod \"kube-proxy-m2zpr\" (UID: \"69d79892-828c-4f7a-b513-947e20961afe\") " pod="kube-system/kube-proxy-m2zpr"
	Dec 06 09:52:17 embed-certs-997968 kubelet[1341]: I1206 09:52:17.871179    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69d79892-828c-4f7a-b513-947e20961afe-xtables-lock\") pod \"kube-proxy-m2zpr\" (UID: \"69d79892-828c-4f7a-b513-947e20961afe\") " pod="kube-system/kube-proxy-m2zpr"
	Dec 06 09:52:17 embed-certs-997968 kubelet[1341]: I1206 09:52:17.871218    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69d79892-828c-4f7a-b513-947e20961afe-lib-modules\") pod \"kube-proxy-m2zpr\" (UID: \"69d79892-828c-4f7a-b513-947e20961afe\") " pod="kube-system/kube-proxy-m2zpr"
	Dec 06 09:52:17 embed-certs-997968 kubelet[1341]: I1206 09:52:17.871249    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2t68\" (UniqueName: \"kubernetes.io/projected/323e6efb-c1dc-4444-a267-62cbeea83a87-kube-api-access-k2t68\") pod \"kindnet-f84xr\" (UID: \"323e6efb-c1dc-4444-a267-62cbeea83a87\") " pod="kube-system/kindnet-f84xr"
	Dec 06 09:52:18 embed-certs-997968 kubelet[1341]: I1206 09:52:18.512877    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m2zpr" podStartSLOduration=1.512854055 podStartE2EDuration="1.512854055s" podCreationTimestamp="2025-12-06 09:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:18.512376028 +0000 UTC m=+6.165438946" watchObservedRunningTime="2025-12-06 09:52:18.512854055 +0000 UTC m=+6.165916950"
	Dec 06 09:52:18 embed-certs-997968 kubelet[1341]: I1206 09:52:18.522628    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-f84xr" podStartSLOduration=1.522605306 podStartE2EDuration="1.522605306s" podCreationTimestamp="2025-12-06 09:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:18.522364973 +0000 UTC m=+6.175427862" watchObservedRunningTime="2025-12-06 09:52:18.522605306 +0000 UTC m=+6.175668195"
	Dec 06 09:52:29 embed-certs-997968 kubelet[1341]: I1206 09:52:29.266325    1341 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:52:29 embed-certs-997968 kubelet[1341]: I1206 09:52:29.347042    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a588cb47-54de-454f-801b-111a581192ad-config-volume\") pod \"coredns-66bc5c9577-kw8nl\" (UID: \"a588cb47-54de-454f-801b-111a581192ad\") " pod="kube-system/coredns-66bc5c9577-kw8nl"
	Dec 06 09:52:29 embed-certs-997968 kubelet[1341]: I1206 09:52:29.347085    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvbv\" (UniqueName: \"kubernetes.io/projected/9f02a7ce-95cb-4187-936a-e77551b1afb8-kube-api-access-vrvbv\") pod \"storage-provisioner\" (UID: \"9f02a7ce-95cb-4187-936a-e77551b1afb8\") " pod="kube-system/storage-provisioner"
	Dec 06 09:52:29 embed-certs-997968 kubelet[1341]: I1206 09:52:29.347107    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzmq\" (UniqueName: \"kubernetes.io/projected/a588cb47-54de-454f-801b-111a581192ad-kube-api-access-2dzmq\") pod \"coredns-66bc5c9577-kw8nl\" (UID: \"a588cb47-54de-454f-801b-111a581192ad\") " pod="kube-system/coredns-66bc5c9577-kw8nl"
	Dec 06 09:52:29 embed-certs-997968 kubelet[1341]: I1206 09:52:29.347125    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9f02a7ce-95cb-4187-936a-e77551b1afb8-tmp\") pod \"storage-provisioner\" (UID: \"9f02a7ce-95cb-4187-936a-e77551b1afb8\") " pod="kube-system/storage-provisioner"
	Dec 06 09:52:30 embed-certs-997968 kubelet[1341]: I1206 09:52:30.542117    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.542093775 podStartE2EDuration="12.542093775s" podCreationTimestamp="2025-12-06 09:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:30.541946495 +0000 UTC m=+18.195009387" watchObservedRunningTime="2025-12-06 09:52:30.542093775 +0000 UTC m=+18.195156665"
	Dec 06 09:52:32 embed-certs-997968 kubelet[1341]: I1206 09:52:32.944872    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kw8nl" podStartSLOduration=15.944842891 podStartE2EDuration="15.944842891s" podCreationTimestamp="2025-12-06 09:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:52:30.557121116 +0000 UTC m=+18.210184007" watchObservedRunningTime="2025-12-06 09:52:32.944842891 +0000 UTC m=+20.597905780"
	Dec 06 09:52:32 embed-certs-997968 kubelet[1341]: I1206 09:52:32.970110    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjcwc\" (UniqueName: \"kubernetes.io/projected/572be28a-1a60-48d5-95e5-a5355b5493ee-kube-api-access-gjcwc\") pod \"busybox\" (UID: \"572be28a-1a60-48d5-95e5-a5355b5493ee\") " pod="default/busybox"
	
	
	==> storage-provisioner [fc1bc580d4cc192b30ac7d7423f44d5b0a5d795132eaa127ca3b29ab82737691] <==
	I1206 09:52:29.683763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:52:29.693753       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:52:29.693809       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:52:29.695983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:29.701357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:52:29.701613       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:52:29.701679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a3232d1c-1b95-4b7b-ae4c-725079989772", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-997968_aaf04c58-4b15-4cb7-89f7-f6f08a919286 became leader
	I1206 09:52:29.701823       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-997968_aaf04c58-4b15-4cb7-89f7-f6f08a919286!
	W1206 09:52:29.703858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:29.708738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:52:29.802340       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-997968_aaf04c58-4b15-4cb7-89f7-f6f08a919286!
	W1206 09:52:31.712375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:31.722100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:33.725232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:33.729166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:35.731833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:35.736660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:37.739869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:37.744180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:39.747755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:39.755590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:41.759571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:41.764030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:43.767417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:43.770957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
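One observation from the dump above: the storage-provisioner's recurring "v1 Endpoints is deprecated in v1.33+" warnings are emitted on every leader-election renewal (roughly every two seconds in the timestamps), because its lock is still an Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event) rather than a coordination.k8s.io/v1 Lease. A hedged way to inspect the lock object in use (a diagnostic sketch, assuming the profile's kubeconfig context; not part of the test harness):

	kubectl --context embed-certs-997968 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context embed-certs-997968 -n kube-system get leases

Components that have migrated to Lease-based locks appear under the second command; the warning itself is harmless for this test.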
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-997968 -n embed-certs-997968
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-997968 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.37s)
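Note on the scheduler errors in the dump above: the repeated "Failed to watch ... is forbidden" lines from kube-scheduler are startup noise from a control plane whose RBAC rules were not yet being served; they stop once the informer caches sync (the "Caches are synced" line at 09:52:13). If they persisted past startup, one hedged way to probe the scheduler's effective permissions is kubectl impersonation (a diagnostic sketch, not part of the test harness):

	kubectl --context embed-certs-997968 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context embed-certs-997968 auth can-i watch persistentvolumes --as=system:kube-scheduler

Both should print "yes" on a healthy cluster, since system:kube-scheduler is granted these verbs by the default bootstrap ClusterRoleBindings.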

TestStartStop/group/newest-cni/serial/Pause (5.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-641599 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-641599 --alsologtostderr -v=1: exit status 80 (1.981470269s)

-- stdout --
	* Pausing node newest-cni-641599 ... 
	
	

-- /stdout --
** stderr ** 
	I1206 09:53:01.287646  792337 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:01.287969  792337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:01.287981  792337 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:01.287985  792337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:01.288253  792337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:01.288525  792337 out.go:368] Setting JSON to false
	I1206 09:53:01.288547  792337 mustload.go:66] Loading cluster: newest-cni-641599
	I1206 09:53:01.288946  792337 config.go:182] Loaded profile config "newest-cni-641599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:53:01.289480  792337 cli_runner.go:164] Run: docker container inspect newest-cni-641599 --format={{.State.Status}}
	I1206 09:53:01.309614  792337 host.go:66] Checking if "newest-cni-641599" exists ...
	I1206 09:53:01.309937  792337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:01.377643  792337 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:01.366063318 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:01.378329  792337 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-641599 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:53:01.380019  792337 out.go:179] * Pausing node newest-cni-641599 ... 
	I1206 09:53:01.380969  792337 host.go:66] Checking if "newest-cni-641599" exists ...
	I1206 09:53:01.381233  792337 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:01.381280  792337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-641599
	I1206 09:53:01.401326  792337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33216 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/newest-cni-641599/id_rsa Username:docker}
	I1206 09:53:01.502767  792337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:01.521974  792337 pause.go:52] kubelet running: true
	I1206 09:53:01.522051  792337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:01.739311  792337 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:01.739532  792337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:01.856372  792337 cri.go:89] found id: "541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c"
	I1206 09:53:01.856404  792337 cri.go:89] found id: "2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab"
	I1206 09:53:01.856417  792337 cri.go:89] found id: "a1efa0049a5e41389449d11959e86aeb1bb15165128f6d01381c3dfa79fa66d1"
	I1206 09:53:01.856421  792337 cri.go:89] found id: "a2224fd9810ea369e75063d82334ae15de55447aae16a68e1d51d07fb3fd2529"
	I1206 09:53:01.856426  792337 cri.go:89] found id: "c443bd6749568678aa996c237f5b1ab8eed91498aad2fde0c70028e891a8345d"
	I1206 09:53:01.856431  792337 cri.go:89] found id: "abc6fabf45d2cdd69c8b919ddced2c3b7dd69d0310045183df185cf3d07b284c"
	I1206 09:53:01.856436  792337 cri.go:89] found id: ""
	I1206 09:53:01.856519  792337 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:01.882899  792337 retry.go:31] will retry after 263.831631ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:01Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:02.147402  792337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:02.176660  792337 pause.go:52] kubelet running: false
	I1206 09:53:02.176746  792337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:02.379644  792337 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:02.379873  792337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:02.477580  792337 cri.go:89] found id: "541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c"
	I1206 09:53:02.477610  792337 cri.go:89] found id: "2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab"
	I1206 09:53:02.477617  792337 cri.go:89] found id: "a1efa0049a5e41389449d11959e86aeb1bb15165128f6d01381c3dfa79fa66d1"
	I1206 09:53:02.477623  792337 cri.go:89] found id: "a2224fd9810ea369e75063d82334ae15de55447aae16a68e1d51d07fb3fd2529"
	I1206 09:53:02.477627  792337 cri.go:89] found id: "c443bd6749568678aa996c237f5b1ab8eed91498aad2fde0c70028e891a8345d"
	I1206 09:53:02.477637  792337 cri.go:89] found id: "abc6fabf45d2cdd69c8b919ddced2c3b7dd69d0310045183df185cf3d07b284c"
	I1206 09:53:02.477641  792337 cri.go:89] found id: ""
	I1206 09:53:02.477688  792337 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:02.495654  792337 retry.go:31] will retry after 449.157908ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:02Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:02.946068  792337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:02.961903  792337 pause.go:52] kubelet running: false
	I1206 09:53:02.961981  792337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:03.099542  792337 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:03.099628  792337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:03.178226  792337 cri.go:89] found id: "541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c"
	I1206 09:53:03.178251  792337 cri.go:89] found id: "2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab"
	I1206 09:53:03.178256  792337 cri.go:89] found id: "a1efa0049a5e41389449d11959e86aeb1bb15165128f6d01381c3dfa79fa66d1"
	I1206 09:53:03.178259  792337 cri.go:89] found id: "a2224fd9810ea369e75063d82334ae15de55447aae16a68e1d51d07fb3fd2529"
	I1206 09:53:03.178262  792337 cri.go:89] found id: "c443bd6749568678aa996c237f5b1ab8eed91498aad2fde0c70028e891a8345d"
	I1206 09:53:03.178266  792337 cri.go:89] found id: "abc6fabf45d2cdd69c8b919ddced2c3b7dd69d0310045183df185cf3d07b284c"
	I1206 09:53:03.178268  792337 cri.go:89] found id: ""
	I1206 09:53:03.178308  792337 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:03.192956  792337 out.go:203] 
	W1206 09:53:03.194221  792337 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:53:03.194246  792337 out.go:285] * 
	* 
	W1206 09:53:03.198308  792337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:53:03.199385  792337 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-641599 --alsologtostderr -v=1 failed: exit status 80
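The root cause is visible in the stderr log: every pause attempt ran "sudo runc list -f json" and failed with "open /run/runc: no such file or directory". runc enumerates containers from its state directory, /run/runc by default, and that path was absent on the freshly restarted node even though CRI-O still reported the kube-system containers. A hedged reproduction from the host (a diagnostic sketch; the --root value is runc's default and may differ from the runtime root CRI-O is configured with):

	out/minikube-linux-amd64 -p newest-cni-641599 ssh -- sudo runc --root /run/runc list -f json
	out/minikube-linux-amd64 -p newest-cni-641599 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

If the second command lists containers while the first errors out, the pause path is querying a runc state directory that the node's runtime never populated.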
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-641599
helpers_test.go:243: (dbg) docker inspect newest-cni-641599:

-- stdout --
	[
	    {
	        "Id": "1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9",
	        "Created": "2025-12-06T09:52:17.019711231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:50.242025871Z",
	            "FinishedAt": "2025-12-06T09:52:49.24894271Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/hostname",
	        "HostsPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/hosts",
	        "LogPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9-json.log",
	        "Name": "/newest-cni-641599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-641599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-641599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9",
	                "LowerDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-641599",
	                "Source": "/var/lib/docker/volumes/newest-cni-641599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-641599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-641599",
	                "name.minikube.sigs.k8s.io": "newest-cni-641599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f17fb43e193397a5d46ab8ac2efcab43f96d286c6c4785fc79c8ce72db0cd8c3",
	            "SandboxKey": "/var/run/docker/netns/f17fb43e1933",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33216"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-641599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50ff6f7233794e663169427b7cb259f6e1696c2d99c914cb9ccec4f1b26d87f1",
	                    "EndpointID": "c5803da07f47fc90977fb1275da44edd28eb15ebecf4e823bb87270735e7139e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "7a:50:a8:9a:6e:a3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-641599",
	                        "1412f254e7b6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
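For reference, the 22/tcp mapping above (127.0.0.1:33216) is exactly what the harness resolved for its SSH client at 09:53:01; the same lookup can be repeated by hand with the template from the stderr log:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-641599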
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-641599 -n newest-cni-641599
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-641599 -n newest-cni-641599: exit status 2 (367.387599ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-641599 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-641599 logs -n 25: (1.066995752s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-581224                                                                                                                                                                                                                         │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                            │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-920129                                                                                                                                                                                                                      │ disable-driver-mounts-920129 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p no-preload-521770 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p stopped-upgrade-031481                                                                                                                                                                                                                            │ stopped-upgrade-031481       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p no-preload-521770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-997968 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ newest-cni-641599 image list --format=json                                                                                                                                                                                                           │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p newest-cni-641599 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
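	The table above is the tail of minikube's per-profile command audit; assuming a stock v1.37.0 binary, the same listing can be re-printed from the audit log (a sketch; the --audit flag is assumed available in this build):
	    # re-print the command audit shown above
	    out/minikube-linux-amd64 logs -p embed-certs-997968 --audit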
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:53:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
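	The [IWEF] prefix encodes glog severity (Info, Warning, Error, Fatal), so the stream below can be filtered by level with a plain grep; a minimal sketch, assuming the log was saved as start.log:
	    # keep only warning/error/fatal lines from a glog-formatted stream
	    grep -E '^[[:space:]]*[WEF][0-9]{4} ' start.log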
	I1206 09:53:01.414495  792441 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:01.414882  792441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:01.414899  792441 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:01.414906  792441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:01.415254  792441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:01.415942  792441 out.go:368] Setting JSON to false
	I1206 09:53:01.417500  792441 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9325,"bootTime":1765005456,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:53:01.417574  792441 start.go:143] virtualization: kvm guest
	I1206 09:53:01.420592  792441 out.go:179] * [embed-certs-997968] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:53:01.421696  792441 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:53:01.421707  792441 notify.go:221] Checking for updates...
	I1206 09:53:01.423936  792441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:53:01.424937  792441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:01.425955  792441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:53:01.427191  792441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:53:01.428202  792441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:53:01.429962  792441 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:01.430754  792441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:53:01.457935  792441 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:53:01.458044  792441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:01.530787  792441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:01.516942305 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:01.530939  792441 docker.go:319] overlay module found
	I1206 09:53:01.533112  792441 out.go:179] * Using the docker driver based on existing profile
	I1206 09:53:01.534437  792441 start.go:309] selected driver: docker
	I1206 09:53:01.535046  792441 start.go:927] validating driver "docker" against &{Name:embed-certs-997968 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-997968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:01.535197  792441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:53:01.535934  792441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:01.627014  792441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:01.612733647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:01.627677  792441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:01.627773  792441 cni.go:84] Creating CNI manager for ""
	I1206 09:53:01.627913  792441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:53:01.628080  792441 start.go:353] cluster config:
	{Name:embed-certs-997968 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-997968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:01.633239  792441 out.go:179] * Starting "embed-certs-997968" primary control-plane node in "embed-certs-997968" cluster
	I1206 09:53:01.635102  792441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:53:01.636378  792441 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:53:01.637376  792441 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:01.637413  792441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:53:01.637424  792441 cache.go:65] Caching tarball of preloaded images
	I1206 09:53:01.637553  792441 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:53:01.637552  792441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:53:01.637566  792441 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:53:01.638854  792441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/config.json ...
	I1206 09:53:01.670181  792441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:53:01.670210  792441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:53:01.670225  792441 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:53:01.670266  792441 start.go:360] acquireMachinesLock for embed-certs-997968: {Name:mk7c2877cb98c89e47bc928a86486370b3f29019 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:53:01.670332  792441 start.go:364] duration metric: took 37.436µs to acquireMachinesLock for "embed-certs-997968"
	I1206 09:53:01.670355  792441 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:53:01.670369  792441 fix.go:54] fixHost starting: 
	I1206 09:53:01.670691  792441 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Status}}
	I1206 09:53:01.698593  792441 fix.go:112] recreateIfNeeded on embed-certs-997968: state=Stopped err=<nil>
	W1206 09:53:01.698629  792441 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:53:00.539506  789560 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:00.561729  789560 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:53:00.566116  789560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
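	The hosts update above avoids in-place edits: every line except the stale host.minikube.internal entry is copied to a temp file, the refreshed mapping is appended, and the result is installed with sudo cp. The same idiom in isolation (a sketch; IP and hostname are the ones from the log line above):
	    # rewrite /etc/hosts with one refreshed entry, without editing it in place
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo -e "192.168.103.1\thost.minikube.internal"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts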
	I1206 09:53:00.576758  789560 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-759696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:53:00.576926  789560 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:00.576981  789560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:00.616942  789560 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:00.616967  789560 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:53:00.617021  789560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:00.645840  789560 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:00.645865  789560 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:53:00.645874  789560 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1206 09:53:00.646064  789560 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-759696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
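	The rendered unit above is installed as a systemd drop-in (the 10-kubeadm.conf transfer follows a few lines below); once the node is running, the effective kubelet command line can be inspected from the host, e.g. (a sketch):
	    # show the kubelet unit plus all drop-ins as systemd sees them on the node
	    out/minikube-linux-amd64 -p default-k8s-diff-port-759696 ssh -- sudo systemctl cat kubelet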
	I1206 09:53:00.646172  789560 ssh_runner.go:195] Run: crio config
	I1206 09:53:00.705421  789560 cni.go:84] Creating CNI manager for ""
	I1206 09:53:00.705443  789560 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:53:00.705472  789560 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:53:00.705504  789560 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-759696 NodeName:default-k8s-diff-port-759696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:53:00.705668  789560 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-759696"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
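	A generated config like the one above can be sanity-checked on the node before kubeadm consumes it; a minimal sketch, assuming the file lands at the path used by the transfer step below (kubeadm config validate is available in kubeadm v1.26+):
	    # validate the kubeadm YAML against its schema
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new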
	
	I1206 09:53:00.705754  789560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:53:00.716942  789560 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:53:00.717018  789560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:53:00.726441  789560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1206 09:53:00.743093  789560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:53:00.756606  789560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1206 09:53:00.772038  789560 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:53:00.777002  789560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:00.788851  789560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:00.888407  789560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:00.917751  789560 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696 for IP: 192.168.103.2
	I1206 09:53:00.917776  789560 certs.go:195] generating shared ca certs ...
	I1206 09:53:00.917798  789560 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:00.917967  789560 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:53:00.918039  789560 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:53:00.918056  789560 certs.go:257] generating profile certs ...
	I1206 09:53:00.918162  789560 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/client.key
	I1206 09:53:00.918229  789560 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key.e015ec9a
	I1206 09:53:00.918282  789560 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.key
	I1206 09:53:00.918428  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:53:00.918488  789560 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:53:00.918502  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:53:00.918537  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:53:00.918571  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:53:00.918602  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:53:00.918659  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:00.919493  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:53:00.943334  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:53:00.989386  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:53:01.059026  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:53:01.088905  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1206 09:53:01.109379  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:53:01.131634  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:53:01.151301  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:53:01.171500  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:53:01.194195  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:53:01.215246  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:53:01.236230  789560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:53:01.251324  789560 ssh_runner.go:195] Run: openssl version
	I1206 09:53:01.258942  789560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:01.268179  789560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:53:01.277383  789560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:01.282319  789560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:01.282377  789560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:01.327693  789560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:53:01.338243  789560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:53:01.348691  789560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:53:01.358644  789560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:53:01.363364  789560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:53:01.363427  789560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:53:01.410508  789560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:53:01.420200  789560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:53:01.430570  789560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:53:01.439308  789560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:53:01.443740  789560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:53:01.443802  789560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:53:01.495718  789560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
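	The hash-named links tested above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: the link name is the hash printed by openssl x509 -hash plus a .0 suffix, which is how each test -L pairs with the preceding ln -fs. Creating such a link by hand looks like this (a sketch):
	    # link a CA cert under its subject hash so OpenSSL lookups in /etc/ssl/certs find it
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"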
	I1206 09:53:01.508123  789560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:53:01.514224  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:53:01.575812  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:53:01.637624  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:53:01.698258  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:53:01.761485  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:53:01.824084  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
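	Each -checkend 86400 run above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit marks the cert for regeneration. Stand-alone:
	    # exit 0 if the cert is still valid 24h from now, non-zero otherwise
	    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 && echo ok || echo "expiring within 24h"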
	I1206 09:53:01.892830  789560 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-759696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:01.892953  789560 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:53:01.893020  789560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:53:01.928212  789560 cri.go:89] found id: "49d5db0bf8c817844e681d0c272f78bea45bd7a69be93dbd6b87ce00764c41c3"
	I1206 09:53:01.928239  789560 cri.go:89] found id: "2b4e13927c1dd98b75c5d83e4aec397dc2e4749caaf7821cfac821811b1d3da7"
	I1206 09:53:01.928246  789560 cri.go:89] found id: "96bf17c21fc5ef4c1b3dca26666987c3ead355280a820de4ef784becde9de15b"
	I1206 09:53:01.928251  789560 cri.go:89] found id: "5081ea10eaf550a1552364d04b9716dd633af5964fac9bc876f2cc1e5ca71b16"
	I1206 09:53:01.928256  789560 cri.go:89] found id: ""
	I1206 09:53:01.928308  789560 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:53:01.944283  789560 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:01Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:01.944354  789560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:53:01.953805  789560 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:53:01.953828  789560 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:53:01.953892  789560 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:53:01.962316  789560 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:53:01.963552  789560 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-759696" does not appear in /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:01.964037  789560 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-499330/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-759696" cluster setting kubeconfig missing "default-k8s-diff-port-759696" context setting]
	I1206 09:53:01.964881  789560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:01.966873  789560 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:53:01.975781  789560 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1206 09:53:01.975817  789560 kubeadm.go:602] duration metric: took 21.980817ms to restartPrimaryControlPlane
	I1206 09:53:01.975827  789560 kubeadm.go:403] duration metric: took 83.013168ms to StartCluster
	I1206 09:53:01.975847  789560 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:01.975907  789560 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:01.977386  789560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:01.977781  789560 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:01.977976  789560 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:01.977957  789560 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:53:01.978064  789560 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-759696"
	I1206 09:53:01.978094  789560 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-759696"
	I1206 09:53:01.978098  789560 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-759696"
	W1206 09:53:01.978103  789560 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:53:01.978113  789560 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-759696"
	I1206 09:53:01.978130  789560 host.go:66] Checking if "default-k8s-diff-port-759696" exists ...
	I1206 09:53:01.978129  789560 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-759696"
	I1206 09:53:01.978157  789560 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-759696"
	W1206 09:53:01.978167  789560 addons.go:248] addon dashboard should already be in state true
	I1206 09:53:01.978216  789560 host.go:66] Checking if "default-k8s-diff-port-759696" exists ...
	I1206 09:53:01.978450  789560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:01.978623  789560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:01.978746  789560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:01.979949  789560 out.go:179] * Verifying Kubernetes components...
	I1206 09:53:01.983292  789560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:02.009657  789560 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:53:02.009657  789560 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 09:53:02.010135  789560 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-759696"
	W1206 09:53:02.010158  789560 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:53:02.010186  789560 host.go:66] Checking if "default-k8s-diff-port-759696" exists ...
	I1206 09:53:02.010673  789560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:02.011101  789560 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:02.011181  789560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:53:02.011296  789560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:53:02.012213  789560 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.720799498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.723220818Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1021e5c2-b76a-45fd-8384-b222a72c1a67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.723591444Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=210811b3-c857-4aea-9615-0f6c9949af36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.724791004Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.725195345Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.725385833Z" level=info msg="Ran pod sandbox ad699d4f8d4b83c45466c4af7b3361a84a6500754140b44b528bae6eb5ca66de with infra container: kube-system/kube-proxy-fv54r/POD" id=1021e5c2-b76a-45fd-8384-b222a72c1a67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.726044351Z" level=info msg="Ran pod sandbox 7260abff5d2986b94cabe3c68f5b8f6abaa4948fa5c793e0b9817a8f0dc610f9 with infra container: kube-system/kindnet-kv2gc/POD" id=210811b3-c857-4aea-9615-0f6c9949af36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.726604885Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=aab56b8e-7e19-40b3-a3d4-7a449d18eb62 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.727126685Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=99fc3671-63d1-4251-82e9-4991bbc4ad2f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.72758459Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=db808d20-d7fd-4db4-9dbc-546108e1cb0a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.728009254Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d20ccde7-8784-4c18-ae89-6de3041c6f88 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.728537828Z" level=info msg="Creating container: kube-system/kube-proxy-fv54r/kube-proxy" id=681d3126-0783-428f-b8df-99db86f18ee5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.728653681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.729045969Z" level=info msg="Creating container: kube-system/kindnet-kv2gc/kindnet-cni" id=716270a7-affe-4e39-ba68-070072ded2b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.729130942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.734257828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.734895009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.734911574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.735471905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.76608309Z" level=info msg="Created container 541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c: kube-system/kindnet-kv2gc/kindnet-cni" id=716270a7-affe-4e39-ba68-070072ded2b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.766725093Z" level=info msg="Starting container: 541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c" id=05d388d7-41f7-4887-9ef9-a66ee03b4688 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.768473063Z" level=info msg="Started container" PID=1059 containerID=541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c description=kube-system/kindnet-kv2gc/kindnet-cni id=05d388d7-41f7-4887-9ef9-a66ee03b4688 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7260abff5d2986b94cabe3c68f5b8f6abaa4948fa5c793e0b9817a8f0dc610f9
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.769401795Z" level=info msg="Created container 2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab: kube-system/kube-proxy-fv54r/kube-proxy" id=681d3126-0783-428f-b8df-99db86f18ee5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.769998961Z" level=info msg="Starting container: 2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab" id=a8fb788d-450e-41c9-8573-8d74e42098fd name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.772876761Z" level=info msg="Started container" PID=1060 containerID=2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab description=kube-system/kube-proxy-fv54r/kube-proxy id=a8fb788d-450e-41c9-8573-8d74e42098fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad699d4f8d4b83c45466c4af7b3361a84a6500754140b44b528bae6eb5ca66de
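	The CRI-O excerpt above is trimmed; on a kicbase node the full daemon log lives in the journal and can be pulled the same way, e.g. (a sketch):
	    # tail the CRI-O daemon log on the newest-cni node
	    out/minikube-linux-amd64 -p newest-cni-641599 ssh -- sudo journalctl -u crio --no-pager -n 200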
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	541e05d2a83b8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   7260abff5d298       kindnet-kv2gc                               kube-system
	2a35d15d22772       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   ad699d4f8d4b8       kube-proxy-fv54r                            kube-system
	a1efa0049a5e4       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   02ee36e66f655       kube-apiserver-newest-cni-641599            kube-system
	a2224fd9810ea       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   f9d69f5258f23       kube-controller-manager-newest-cni-641599   kube-system
	c443bd6749568       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   e04480fe34bb8       kube-scheduler-newest-cni-641599            kube-system
	abc6fabf45d2c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   320a85234a1bb       etcd-newest-cni-641599                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-641599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-641599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=newest-cni-641599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-641599
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:52:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:52:59 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:52:59 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:52:59 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 06 Dec 2025 09:52:59 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-641599
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                46a78757-c37e-4e88-b08d-a951bd452cce
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-641599                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-kv2gc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-641599             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-641599    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-fv54r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-641599             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  32s   node-controller  Node newest-cni-641599 event: Registered Node newest-cni-641599 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-641599 event: Registered Node newest-cni-641599 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [abc6fabf45d2cdd69c8b919ddced2c3b7dd69d0310045183df185cf3d07b284c] <==
	{"level":"warn","ts":"2025-12-06T09:52:58.273410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.281204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.290229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.297209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.304023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.310163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.316390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.322715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.328805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.335426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.342653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.349496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.355595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.362410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.370913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.377303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.384386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.398644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.404937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.411982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.428807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.434951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.441323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.447744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.493556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44048","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:53:04 up  2:35,  0 user,  load average: 4.77, 3.16, 3.31
	Linux newest-cni-641599 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c] <==
	I1206 09:52:59.960691       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:52:59.960884       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 09:52:59.960980       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:52:59.960998       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:52:59.961014       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:53:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:53:00.255853       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:53:00.255883       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:53:00.256067       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:53:00.256071       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:53:00.756299       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:53:00.756333       1 metrics.go:72] Registering metrics
	I1206 09:53:00.756427       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a1efa0049a5e41389449d11959e86aeb1bb15165128f6d01381c3dfa79fa66d1] <==
	I1206 09:52:58.970310       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:58.971443       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:52:58.972388       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:52:58.971601       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:52:59.011045       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:52:59.017895       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:59.017921       1 policy_source.go:248] refreshing policies
	I1206 09:52:59.029768       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:59.043410       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:52:59.064520       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:52:59.066275       1 cache.go:39] Caches are synced for autoregister controller
	E1206 09:52:59.072557       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:52:59.072805       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:52:59.275362       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:52:59.300950       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:52:59.320886       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:52:59.328066       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:52:59.333872       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:52:59.364012       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.187.198"}
	I1206 09:52:59.372916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.248.93"}
	I1206 09:52:59.872889       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:53:02.421403       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:53:02.620658       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:53:02.620659       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:53:02.720989       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a2224fd9810ea369e75063d82334ae15de55447aae16a68e1d51d07fb3fd2529] <==
	I1206 09:53:02.201154       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201177       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.200979       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201000       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201009       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201472       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.200991       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201740       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:53:02.202029       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.204692       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.206504       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.212008       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.223220       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-641599"
	I1206 09:53:02.224537       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1206 09:53:02.231069       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.241419       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.241842       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.242282       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.247322       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.247645       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.259534       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.299873       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.301117       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.301141       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:53:02.301149       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab] <==
	I1206 09:52:59.814106       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:52:59.882673       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:59.983184       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:59.983254       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 09:52:59.983363       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:53:00.006565       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:53:00.006713       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:53:00.021251       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:53:00.021734       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:53:00.021797       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:00.023188       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:53:00.023222       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:53:00.023250       1 config.go:200] "Starting service config controller"
	I1206 09:53:00.023256       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:53:00.023253       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:53:00.023268       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:53:00.023324       1 config.go:309] "Starting node config controller"
	I1206 09:53:00.023330       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:53:00.023339       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:53:00.123381       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:53:00.123413       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:53:00.123501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c443bd6749568678aa996c237f5b1ab8eed91498aad2fde0c70028e891a8345d] <==
	I1206 09:52:57.265150       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:52:58.889472       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:52:58.889513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:52:58.889525       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:52:58.889534       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:52:58.940333       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:52:58.940427       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:52:58.946315       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:52:58.946402       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:58.947105       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:52:58.947199       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:52:58.962843       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1206 09:52:58.963283       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:58.963313       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:58.965004       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1206 09:52:58.963411       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:52:58.963439       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1206 09:52:58.963341       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:58.978488       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1206 09:52:58.978779       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:52:58.978821       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:58.978452       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1206 09:52:58.978615       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	I1206 09:53:00.548896       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.053059     675 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.054002     675 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.067800     675 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-641599\" already exists" pod="kube-system/kube-controller-manager-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.067846     675 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.076954     675 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-641599\" already exists" pod="kube-system/kube-scheduler-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.076989     675 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.082763     675 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-641599\" already exists" pod="kube-system/etcd-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.082797     675 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.091205     675 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-641599\" already exists" pod="kube-system/kube-apiserver-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.411022     675 apiserver.go:52] "Watching apiserver"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.415960     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-641599" containerName="kube-controller-manager"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.449060     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-641599" containerName="kube-scheduler"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.449198     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-641599" containerName="kube-apiserver"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.449423     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-641599" containerName="etcd"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.515652     675 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613438     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-cni-cfg\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613494     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-lib-modules\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613590     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-xtables-lock\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613626     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b74c4162-c9cd-43a6-9a4a-2162b2899489-xtables-lock\") pod \"kube-proxy-fv54r\" (UID: \"b74c4162-c9cd-43a6-9a4a-2162b2899489\") " pod="kube-system/kube-proxy-fv54r"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613715     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74c4162-c9cd-43a6-9a4a-2162b2899489-lib-modules\") pod \"kube-proxy-fv54r\" (UID: \"b74c4162-c9cd-43a6-9a4a-2162b2899489\") " pod="kube-system/kube-proxy-fv54r"
	Dec 06 09:53:00 newest-cni-641599 kubelet[675]: E1206 09:53:00.028398     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-641599" containerName="kube-controller-manager"
	Dec 06 09:53:01 newest-cni-641599 kubelet[675]: E1206 09:53:01.353013     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-641599" containerName="kube-apiserver"
	Dec 06 09:53:01 newest-cni-641599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:53:01 newest-cni-641599 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:53:01 newest-cni-641599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-641599 -n newest-cni-641599
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-641599 -n newest-cni-641599: exit status 2 (343.97599ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-641599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-8njm9 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz
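For reference, the non-running-pod check above is an ordinary kubectl query; a minimal standalone equivalent (assuming the same context name) is:

	kubectl --context newest-cni-641599 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

The selector status.phase!=Running matches Pending, Succeeded, Failed, and Unknown pods; here it picks up the CoreDNS, storage-provisioner, and dashboard pods still blocked behind the NetworkReady=false condition shown in the node description above.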
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz: exit status 1 (73.358317ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-8njm9" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6smnm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-lczmz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz: exit status 1
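The NotFound errors above are an artifact of namespacing rather than of missing pods: kubectl describe pod without -n searches only the current (default) namespace. A variant that would resolve these names, assuming the namespaces these components normally run in, is:

	kubectl --context newest-cni-641599 -n kube-system describe pod coredns-7d764666f9-8njm9 storage-provisioner
	kubectl --context newest-cni-641599 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz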
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-641599
helpers_test.go:243: (dbg) docker inspect newest-cni-641599:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9",
	        "Created": "2025-12-06T09:52:17.019711231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:50.242025871Z",
	            "FinishedAt": "2025-12-06T09:52:49.24894271Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/hostname",
	        "HostsPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/hosts",
	        "LogPath": "/var/lib/docker/containers/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9/1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9-json.log",
	        "Name": "/newest-cni-641599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-641599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-641599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1412f254e7b6fe5c7636541e55f67454c59a175c310fec6fa6b62d612278aad9",
	                "LowerDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/912fd91fe02ab09879ed7acc90019514cb9028b01cdee2128c97de2ae9bc8dbd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-641599",
	                "Source": "/var/lib/docker/volumes/newest-cni-641599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-641599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-641599",
	                "name.minikube.sigs.k8s.io": "newest-cni-641599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f17fb43e193397a5d46ab8ac2efcab43f96d286c6c4785fc79c8ce72db0cd8c3",
	            "SandboxKey": "/var/run/docker/netns/f17fb43e1933",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33216"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-641599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50ff6f7233794e663169427b7cb259f6e1696c2d99c914cb9ccec4f1b26d87f1",
	                    "EndpointID": "c5803da07f47fc90977fb1275da44edd28eb15ebecf4e823bb87270735e7139e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "7a:50:a8:9a:6e:a3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-641599",
	                        "1412f254e7b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
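When only a few fields of an inspect dump like the one above matter, the docker CLI accepts a Go template via -f; an illustrative narrowing (same container name as above):

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}} started={{.State.StartedAt}}' newest-cni-641599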
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-641599 -n newest-cni-641599
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-641599 -n newest-cni-641599: exit status 2 (322.163454ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
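As the harness notes ("may be ok"), a non-zero exit can coexist with a Running host here, since minikube status encodes component state in its exit code rather than failing outright. Because --format takes a Go template over the status struct, several components can be inspected in one call; a sketch using the field names already exercised above plus Kubelet:

	out/minikube-linux-amd64 status -p newest-cni-641599 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'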
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-641599 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-581224                                                                                                                                                                                                                         │ kubernetes-upgrade-581224    │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-507108                                                                                                                                                                                                                            │ old-k8s-version-507108       │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-920129                                                                                                                                                                                                                      │ disable-driver-mounts-920129 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:51 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:51 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p no-preload-521770 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p stopped-upgrade-031481                                                                                                                                                                                                                            │ stopped-upgrade-031481       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p no-preload-521770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-997968 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ newest-cni-641599 image list --format=json                                                                                                                                                                                                           │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p newest-cni-641599 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:53:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:53:01.414495  792441 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:01.414882  792441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:01.414899  792441 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:01.414906  792441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:01.415254  792441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:01.415942  792441 out.go:368] Setting JSON to false
	I1206 09:53:01.417500  792441 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9325,"bootTime":1765005456,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:53:01.417574  792441 start.go:143] virtualization: kvm guest
	I1206 09:53:01.420592  792441 out.go:179] * [embed-certs-997968] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:53:01.421696  792441 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:53:01.421707  792441 notify.go:221] Checking for updates...
	I1206 09:53:01.423936  792441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:53:01.424937  792441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:01.425955  792441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:53:01.427191  792441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:53:01.428202  792441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:53:01.429962  792441 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:01.430754  792441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:53:01.457935  792441 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:53:01.458044  792441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:01.530787  792441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:01.516942305 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:01.530939  792441 docker.go:319] overlay module found
	I1206 09:53:01.533112  792441 out.go:179] * Using the docker driver based on existing profile
	I1206 09:53:01.534437  792441 start.go:309] selected driver: docker
	I1206 09:53:01.535046  792441 start.go:927] validating driver "docker" against &{Name:embed-certs-997968 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-997968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:01.535197  792441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:53:01.535934  792441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:01.627014  792441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:01.612733647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:01.627677  792441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:01.627773  792441 cni.go:84] Creating CNI manager for ""
	I1206 09:53:01.627913  792441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:53:01.628080  792441 start.go:353] cluster config:
	{Name:embed-certs-997968 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-997968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:01.633239  792441 out.go:179] * Starting "embed-certs-997968" primary control-plane node in "embed-certs-997968" cluster
	I1206 09:53:01.635102  792441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:53:01.636378  792441 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:53:01.637376  792441 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:01.637413  792441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:53:01.637424  792441 cache.go:65] Caching tarball of preloaded images
	I1206 09:53:01.637553  792441 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:53:01.637552  792441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:53:01.637566  792441 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:53:01.638854  792441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/embed-certs-997968/config.json ...
	I1206 09:53:01.670181  792441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:53:01.670210  792441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:53:01.670225  792441 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:53:01.670266  792441 start.go:360] acquireMachinesLock for embed-certs-997968: {Name:mk7c2877cb98c89e47bc928a86486370b3f29019 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:53:01.670332  792441 start.go:364] duration metric: took 37.436µs to acquireMachinesLock for "embed-certs-997968"
	I1206 09:53:01.670355  792441 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:53:01.670369  792441 fix.go:54] fixHost starting: 
	I1206 09:53:01.670691  792441 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Status}}
	I1206 09:53:01.698593  792441 fix.go:112] recreateIfNeeded on embed-certs-997968: state=Stopped err=<nil>
	W1206 09:53:01.698629  792441 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:53:00.539506  789560 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:00.561729  789560 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:53:00.566116  789560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:00.576758  789560 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-759696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:53:00.576926  789560 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:00.576981  789560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:00.616942  789560 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:00.616967  789560 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:53:00.617021  789560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:00.645840  789560 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:00.645865  789560 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:53:00.645874  789560 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1206 09:53:00.646064  789560 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-759696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
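	
	The [Service] fragment above is what minikube installs as the kubelet systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the empty ExecStart= line first clears the packaged default so the override fully replaces it. To see the effective unit on the node, one option is:
	
	    minikube -p default-k8s-diff-port-759696 ssh -- systemctl cat kubelet
	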
	I1206 09:53:00.646172  789560 ssh_runner.go:195] Run: crio config
	I1206 09:53:00.705421  789560 cni.go:84] Creating CNI manager for ""
	I1206 09:53:00.705443  789560 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:53:00.705472  789560 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:53:00.705504  789560 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-759696 NodeName:default-k8s-diff-port-759696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:53:00.705668  789560 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-759696"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
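	
	This is the full kubeadm manifest minikube renders before deciding whether the restart needs reconfiguration; it is copied to /var/tmp/minikube/kubeadm.yaml.new just below and later diffed against the previous copy. Recent kubeadm releases can also sanity-check such a file offline via `kubeadm config validate`; a sketch using the node's own binary (paths taken from the surrounding log):
	
	    minikube -p default-k8s-diff-port-759696 ssh -- \
	      sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	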
	
	I1206 09:53:00.705754  789560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:53:00.716942  789560 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:53:00.717018  789560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:53:00.726441  789560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1206 09:53:00.743093  789560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:53:00.756606  789560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1206 09:53:00.772038  789560 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:53:00.777002  789560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
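	
	The one-liner above is minikube's idempotent /etc/hosts edit: strip any stale line for the name, append the fresh mapping into a temp file, then install it with sudo cp. The temp-file step matters because a plain `sudo echo ... >> /etc/hosts` would perform the redirection in the unprivileged shell. The same pattern, generalized (the host name and IP below are placeholders):
	
	    { grep -v $'\tmy.host.example$' /etc/hosts; echo $'192.0.2.10\tmy.host.example'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	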
	I1206 09:53:00.788851  789560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:00.888407  789560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:00.917751  789560 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696 for IP: 192.168.103.2
	I1206 09:53:00.917776  789560 certs.go:195] generating shared ca certs ...
	I1206 09:53:00.917798  789560 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:00.917967  789560 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:53:00.918039  789560 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:53:00.918056  789560 certs.go:257] generating profile certs ...
	I1206 09:53:00.918162  789560 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/client.key
	I1206 09:53:00.918229  789560 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key.e015ec9a
	I1206 09:53:00.918282  789560 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.key
	I1206 09:53:00.918428  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:53:00.918488  789560 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:53:00.918502  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:53:00.918537  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:53:00.918571  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:53:00.918602  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:53:00.918659  789560 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:00.919493  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:53:00.943334  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:53:00.989386  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:53:01.059026  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:53:01.088905  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1206 09:53:01.109379  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:53:01.131634  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:53:01.151301  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/default-k8s-diff-port-759696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:53:01.171500  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:53:01.194195  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:53:01.215246  789560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:53:01.236230  789560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:53:01.251324  789560 ssh_runner.go:195] Run: openssl version
	I1206 09:53:01.258942  789560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:01.268179  789560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:53:01.277383  789560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:01.282319  789560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:01.282377  789560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:01.327693  789560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:53:01.338243  789560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:53:01.348691  789560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:53:01.358644  789560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:53:01.363364  789560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:53:01.363427  789560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:53:01.410508  789560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:53:01.420200  789560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:53:01.430570  789560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:53:01.439308  789560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:53:01.443740  789560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:53:01.443802  789560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:53:01.495718  789560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
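	
	All three certificate blocks above follow the same OpenSSL trust-store convention: compute the subject hash of the PEM, then expose it under /etc/ssl/certs as <hash>.0 (b5213941, 51391683 and 3ec20f2e are exactly those hashes). The pattern in isolation:
	
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	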
	I1206 09:53:01.508123  789560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:53:01.514224  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:53:01.575812  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:53:01.637624  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:53:01.698258  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:53:01.761485  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:53:01.824084  789560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
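	
	Each -checkend 86400 probe exits 0 only if the certificate is still valid 24 hours from now, which is how minikube decides that the existing control-plane certs can be reused instead of regenerated. Standalone:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo 'valid for at least 24h' || echo 'expires within 24h'
	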
	I1206 09:53:01.892830  789560 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-759696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-759696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:01.892953  789560 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:53:01.893020  789560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:53:01.928212  789560 cri.go:89] found id: "49d5db0bf8c817844e681d0c272f78bea45bd7a69be93dbd6b87ce00764c41c3"
	I1206 09:53:01.928239  789560 cri.go:89] found id: "2b4e13927c1dd98b75c5d83e4aec397dc2e4749caaf7821cfac821811b1d3da7"
	I1206 09:53:01.928246  789560 cri.go:89] found id: "96bf17c21fc5ef4c1b3dca26666987c3ead355280a820de4ef784becde9de15b"
	I1206 09:53:01.928251  789560 cri.go:89] found id: "5081ea10eaf550a1552364d04b9716dd633af5964fac9bc876f2cc1e5ca71b16"
	I1206 09:53:01.928256  789560 cri.go:89] found id: ""
	I1206 09:53:01.928308  789560 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:53:01.944283  789560 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:01Z" level=error msg="open /run/runc: no such file or directory"
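	
	This warning appears benign on a node that has just been restarted: the unpause check shells out to runc, and /run/runc (the state root cri-o appears to point runc at) does not exist until runc has registered container state there, so minikube skips the check and continues. To inspect runc's view by hand, assuming that same state root:
	
	    sudo runc --root /run/runc list
	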
	I1206 09:53:01.944354  789560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:53:01.953805  789560 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:53:01.953828  789560 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:53:01.953892  789560 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:53:01.962316  789560 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:53:01.963552  789560 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-759696" does not appear in /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:01.964037  789560 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-499330/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-759696" cluster setting kubeconfig missing "default-k8s-diff-port-759696" context setting]
	I1206 09:53:01.964881  789560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:01.966873  789560 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:53:01.975781  789560 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1206 09:53:01.975817  789560 kubeadm.go:602] duration metric: took 21.980817ms to restartPrimaryControlPlane
	I1206 09:53:01.975827  789560 kubeadm.go:403] duration metric: took 83.013168ms to StartCluster
	I1206 09:53:01.975847  789560 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:01.975907  789560 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:01.977386  789560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:01.977781  789560 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:01.977976  789560 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:01.977957  789560 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:53:01.978064  789560 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-759696"
	I1206 09:53:01.978094  789560 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-759696"
	I1206 09:53:01.978098  789560 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-759696"
	W1206 09:53:01.978103  789560 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:53:01.978113  789560 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-759696"
	I1206 09:53:01.978130  789560 host.go:66] Checking if "default-k8s-diff-port-759696" exists ...
	I1206 09:53:01.978129  789560 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-759696"
	I1206 09:53:01.978157  789560 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-759696"
	W1206 09:53:01.978167  789560 addons.go:248] addon dashboard should already be in state true
	I1206 09:53:01.978216  789560 host.go:66] Checking if "default-k8s-diff-port-759696" exists ...
	I1206 09:53:01.978450  789560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:01.978623  789560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:01.978746  789560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:01.979949  789560 out.go:179] * Verifying Kubernetes components...
	I1206 09:53:01.983292  789560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:02.009657  789560 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:53:02.009657  789560 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 09:53:02.010135  789560 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-759696"
	W1206 09:53:02.010158  789560 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:53:02.010186  789560 host.go:66] Checking if "default-k8s-diff-port-759696" exists ...
	I1206 09:53:02.010673  789560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:02.011101  789560 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:02.011181  789560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:53:02.011296  789560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:53:02.012213  789560 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:53:02.013580  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:53:02.013692  789560 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:53:02.015065  789560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:53:02.051287  789560 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:02.051734  789560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:53:02.051825  789560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:53:02.053136  789560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33221 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:53:02.057053  789560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33221 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:53:02.089739  789560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33221 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:53:02.210402  789560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:02.234214  789560 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-759696" to be "Ready" ...
	I1206 09:53:02.260302  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:53:02.260329  789560 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:53:02.261832  789560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:02.267273  789560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:02.290985  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:53:02.291018  789560 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:53:02.311248  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:53:02.311296  789560 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:53:02.338023  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:53:02.338056  789560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:53:02.365885  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:53:02.365919  789560 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:53:02.390253  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:53:02.390281  789560 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:53:02.415448  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:53:02.415496  789560 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:53:02.434576  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:53:02.434604  789560 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:53:02.451019  789560 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:53:02.451038  789560 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:53:02.468888  789560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:53:03.588635  789560 node_ready.go:49] node "default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:03.588674  789560 node_ready.go:38] duration metric: took 1.354400814s for node "default-k8s-diff-port-759696" to be "Ready" ...
	I1206 09:53:03.588692  789560 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:53:03.588747  789560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:53:04.145597  789560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.878294809s)
	I1206 09:53:04.145723  789560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.676800475s)
	I1206 09:53:04.145596  789560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.883706975s)
	I1206 09:53:04.145886  789560 api_server.go:72] duration metric: took 2.168058376s to wait for apiserver process to appear ...
	I1206 09:53:04.145895  789560 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:53:04.145914  789560 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1206 09:53:04.150122  789560 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-759696 addons enable metrics-server
	
	I1206 09:53:04.150841  789560 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:53:04.150867  789560 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
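	
	This 500 is the usual restart transient: every check reports ok except the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, which flip to ok once the apiserver finishes recreating its bootstrap objects, and minikube simply re-polls until the endpoint returns 200. The equivalent manual probe:
	
	    kubectl --context default-k8s-diff-port-759696 get --raw='/healthz?verbose'
	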
	I1206 09:53:04.154020  789560 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1206 09:53:01.002293  782026 pod_ready.go:104] pod "coredns-7d764666f9-mhwh5" is not "Ready", error: <nil>
	W1206 09:53:03.496912  782026 pod_ready.go:104] pod "coredns-7d764666f9-mhwh5" is not "Ready", error: <nil>
	I1206 09:53:04.154903  789560 addons.go:530] duration metric: took 2.176954495s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	
	
	==> CRI-O <==
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.720799498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.723220818Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1021e5c2-b76a-45fd-8384-b222a72c1a67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.723591444Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=210811b3-c857-4aea-9615-0f6c9949af36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.724791004Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.725195345Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.725385833Z" level=info msg="Ran pod sandbox ad699d4f8d4b83c45466c4af7b3361a84a6500754140b44b528bae6eb5ca66de with infra container: kube-system/kube-proxy-fv54r/POD" id=1021e5c2-b76a-45fd-8384-b222a72c1a67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.726044351Z" level=info msg="Ran pod sandbox 7260abff5d2986b94cabe3c68f5b8f6abaa4948fa5c793e0b9817a8f0dc610f9 with infra container: kube-system/kindnet-kv2gc/POD" id=210811b3-c857-4aea-9615-0f6c9949af36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.726604885Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=aab56b8e-7e19-40b3-a3d4-7a449d18eb62 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.727126685Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=99fc3671-63d1-4251-82e9-4991bbc4ad2f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.72758459Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=db808d20-d7fd-4db4-9dbc-546108e1cb0a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.728009254Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d20ccde7-8784-4c18-ae89-6de3041c6f88 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.728537828Z" level=info msg="Creating container: kube-system/kube-proxy-fv54r/kube-proxy" id=681d3126-0783-428f-b8df-99db86f18ee5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.728653681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.729045969Z" level=info msg="Creating container: kube-system/kindnet-kv2gc/kindnet-cni" id=716270a7-affe-4e39-ba68-070072ded2b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.729130942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.734257828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.734895009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.734911574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.735471905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.76608309Z" level=info msg="Created container 541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c: kube-system/kindnet-kv2gc/kindnet-cni" id=716270a7-affe-4e39-ba68-070072ded2b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.766725093Z" level=info msg="Starting container: 541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c" id=05d388d7-41f7-4887-9ef9-a66ee03b4688 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.768473063Z" level=info msg="Started container" PID=1059 containerID=541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c description=kube-system/kindnet-kv2gc/kindnet-cni id=05d388d7-41f7-4887-9ef9-a66ee03b4688 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7260abff5d2986b94cabe3c68f5b8f6abaa4948fa5c793e0b9817a8f0dc610f9
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.769401795Z" level=info msg="Created container 2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab: kube-system/kube-proxy-fv54r/kube-proxy" id=681d3126-0783-428f-b8df-99db86f18ee5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.769998961Z" level=info msg="Starting container: 2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab" id=a8fb788d-450e-41c9-8573-8d74e42098fd name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:52:59 newest-cni-641599 crio[523]: time="2025-12-06T09:52:59.772876761Z" level=info msg="Started container" PID=1060 containerID=2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab description=kube-system/kube-proxy-fv54r/kube-proxy id=a8fb788d-450e-41c9-8573-8d74e42098fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad699d4f8d4b83c45466c4af7b3361a84a6500754140b44b528bae6eb5ca66de
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	541e05d2a83b8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   7260abff5d298       kindnet-kv2gc                               kube-system
	2a35d15d22772       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   ad699d4f8d4b8       kube-proxy-fv54r                            kube-system
	a1efa0049a5e4       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   9 seconds ago       Running             kube-apiserver            1                   02ee36e66f655       kube-apiserver-newest-cni-641599            kube-system
	a2224fd9810ea       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   9 seconds ago       Running             kube-controller-manager   1                   f9d69f5258f23       kube-controller-manager-newest-cni-641599   kube-system
	c443bd6749568       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   9 seconds ago       Running             kube-scheduler            1                   e04480fe34bb8       kube-scheduler-newest-cni-641599            kube-system
	abc6fabf45d2c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   9 seconds ago       Running             etcd                      1                   320a85234a1bb       etcd-newest-cni-641599                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-641599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-641599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=newest-cni-641599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-641599
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:52:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:52:59 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:52:59 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:52:59 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 06 Dec 2025 09:52:59 +0000   Sat, 06 Dec 2025 09:52:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-641599
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                46a78757-c37e-4e88-b08d-a951bd452cce
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-641599                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-kv2gc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-641599             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-641599    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-fv54r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-641599             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  34s   node-controller  Node newest-cni-641599 event: Registered Node newest-cni-641599 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-641599 event: Registered Node newest-cni-641599 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [abc6fabf45d2cdd69c8b919ddced2c3b7dd69d0310045183df185cf3d07b284c] <==
	{"level":"warn","ts":"2025-12-06T09:52:58.273410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.281204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.290229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.297209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.304023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.310163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.316390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.322715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.328805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.335426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.342653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.349496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.355595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.362410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.370913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.377303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.384386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.398644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.404937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.411982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.428807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.434951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.441323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.447744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:58.493556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44048","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:53:06 up  2:35,  0 user,  load average: 4.77, 3.16, 3.31
	Linux newest-cni-641599 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [541e05d2a83b839f4cff431bb3b684a81f598d44c4f2d18c25188da52515256c] <==
	I1206 09:52:59.960691       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:52:59.960884       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 09:52:59.960980       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:52:59.960998       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:52:59.961014       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:53:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:53:00.255853       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:53:00.255883       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:53:00.256067       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:53:00.256071       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:53:00.756299       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:53:00.756333       1 metrics.go:72] Registering metrics
	I1206 09:53:00.756427       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a1efa0049a5e41389449d11959e86aeb1bb15165128f6d01381c3dfa79fa66d1] <==
	I1206 09:52:58.970310       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:58.971443       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:52:58.972388       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:52:58.971601       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:52:59.011045       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:52:59.017895       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:59.017921       1 policy_source.go:248] refreshing policies
	I1206 09:52:59.029768       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:59.043410       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:52:59.064520       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:52:59.066275       1 cache.go:39] Caches are synced for autoregister controller
	E1206 09:52:59.072557       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:52:59.072805       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:52:59.275362       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:52:59.300950       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:52:59.320886       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:52:59.328066       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:52:59.333872       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:52:59.364012       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.187.198"}
	I1206 09:52:59.372916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.248.93"}
	I1206 09:52:59.872889       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:53:02.421403       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:53:02.620658       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:53:02.620659       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:53:02.720989       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a2224fd9810ea369e75063d82334ae15de55447aae16a68e1d51d07fb3fd2529] <==
	I1206 09:53:02.201154       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201177       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.200979       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201000       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201009       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201472       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.200991       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.201740       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:53:02.202029       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.204692       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.206504       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.212008       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.223220       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-641599"
	I1206 09:53:02.224537       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1206 09:53:02.231069       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.241419       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.241842       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.242282       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.247322       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.247645       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.259534       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.299873       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.301117       1 shared_informer.go:377] "Caches are synced"
	I1206 09:53:02.301141       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:53:02.301149       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [2a35d15d22772369f0bd2cd903ca48cfd7ce47fff00bc783138a458455692bab] <==
	I1206 09:52:59.814106       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:52:59.882673       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:59.983184       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:59.983254       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 09:52:59.983363       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:53:00.006565       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:53:00.006713       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:53:00.021251       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:53:00.021734       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:53:00.021797       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:00.023188       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:53:00.023222       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:53:00.023250       1 config.go:200] "Starting service config controller"
	I1206 09:53:00.023256       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:53:00.023253       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:53:00.023268       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:53:00.023324       1 config.go:309] "Starting node config controller"
	I1206 09:53:00.023330       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:53:00.023339       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:53:00.123381       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:53:00.123413       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:53:00.123501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c443bd6749568678aa996c237f5b1ab8eed91498aad2fde0c70028e891a8345d] <==
	I1206 09:52:57.265150       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:52:58.889472       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:52:58.889513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:52:58.889525       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:52:58.889534       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:52:58.940333       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:52:58.940427       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:52:58.946315       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:52:58.946402       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:58.947105       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:52:58.947199       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:52:58.962843       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1206 09:52:58.963283       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:58.963313       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:58.965004       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1206 09:52:58.963411       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:52:58.963439       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1206 09:52:58.963341       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:58.978488       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1206 09:52:58.978779       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:52:58.978821       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:58.978452       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1206 09:52:58.978615       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	I1206 09:53:00.548896       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.053059     675 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.054002     675 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.067800     675 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-641599\" already exists" pod="kube-system/kube-controller-manager-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.067846     675 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.076954     675 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-641599\" already exists" pod="kube-system/kube-scheduler-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.076989     675 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.082763     675 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-641599\" already exists" pod="kube-system/etcd-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.082797     675 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.091205     675 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-641599\" already exists" pod="kube-system/kube-apiserver-newest-cni-641599"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.411022     675 apiserver.go:52] "Watching apiserver"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.415960     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-641599" containerName="kube-controller-manager"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.449060     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-641599" containerName="kube-scheduler"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.449198     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-641599" containerName="kube-apiserver"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: E1206 09:52:59.449423     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-641599" containerName="etcd"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.515652     675 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613438     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-cni-cfg\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613494     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-lib-modules\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613590     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f27b79f-29eb-4e3e-9a65-fbc2529e4f09-xtables-lock\") pod \"kindnet-kv2gc\" (UID: \"0f27b79f-29eb-4e3e-9a65-fbc2529e4f09\") " pod="kube-system/kindnet-kv2gc"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613626     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b74c4162-c9cd-43a6-9a4a-2162b2899489-xtables-lock\") pod \"kube-proxy-fv54r\" (UID: \"b74c4162-c9cd-43a6-9a4a-2162b2899489\") " pod="kube-system/kube-proxy-fv54r"
	Dec 06 09:52:59 newest-cni-641599 kubelet[675]: I1206 09:52:59.613715     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74c4162-c9cd-43a6-9a4a-2162b2899489-lib-modules\") pod \"kube-proxy-fv54r\" (UID: \"b74c4162-c9cd-43a6-9a4a-2162b2899489\") " pod="kube-system/kube-proxy-fv54r"
	Dec 06 09:53:00 newest-cni-641599 kubelet[675]: E1206 09:53:00.028398     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-641599" containerName="kube-controller-manager"
	Dec 06 09:53:01 newest-cni-641599 kubelet[675]: E1206 09:53:01.353013     675 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-641599" containerName="kube-apiserver"
	Dec 06 09:53:01 newest-cni-641599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:53:01 newest-cni-641599 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:53:01 newest-cni-641599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-641599 -n newest-cni-641599
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-641599 -n newest-cni-641599: exit status 2 (333.826074ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-641599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-8njm9 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz: exit status 1 (82.157027ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-8njm9" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6smnm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-lczmz" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-641599 describe pod coredns-7d764666f9-8njm9 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6smnm kubernetes-dashboard-b84665fb8-lczmz: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.78s)
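A minimal sketch for re-checking the non-running pods by hand, assuming the newest-cni-641599 context still exists (the field selector mirrors the one the harness used above); the NotFound errors are consistent with those pods having been deleted or replaced between the harness's two kubectl calls:

	kubectl --context newest-cni-641599 get pods -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-641599 get events -A --sort-by=.lastTimestamp | tail -n 20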

TestStartStop/group/no-preload/serial/Pause (5.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-521770 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-521770 --alsologtostderr -v=1: exit status 80 (1.551231479s)

-- stdout --
	* Pausing node no-preload-521770 ... 
	
	

-- /stdout --
** stderr ** 
	I1206 09:53:27.040349  799803 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:27.040628  799803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:27.040641  799803 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:27.040648  799803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:27.040878  799803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:27.041123  799803 out.go:368] Setting JSON to false
	I1206 09:53:27.041143  799803 mustload.go:66] Loading cluster: no-preload-521770
	I1206 09:53:27.041496  799803 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:53:27.041856  799803 cli_runner.go:164] Run: docker container inspect no-preload-521770 --format={{.State.Status}}
	I1206 09:53:27.060027  799803 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:53:27.060298  799803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:27.119767  799803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-06 09:53:27.109828267 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:27.120357  799803 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-521770 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:53:27.122190  799803 out.go:179] * Pausing node no-preload-521770 ... 
	I1206 09:53:27.123288  799803 host.go:66] Checking if "no-preload-521770" exists ...
	I1206 09:53:27.123599  799803 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:27.123642  799803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770
	I1206 09:53:27.141676  799803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/no-preload-521770/id_rsa Username:docker}
	I1206 09:53:27.235382  799803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:27.266503  799803 pause.go:52] kubelet running: true
	I1206 09:53:27.266592  799803 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:27.441402  799803 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:27.441519  799803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:27.512206  799803 cri.go:89] found id: "2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94"
	I1206 09:53:27.512238  799803 cri.go:89] found id: "e95c1b791a64eddfbcbb348c3a235e0708db18cf4a7f64bb9a7fff385ba3c65f"
	I1206 09:53:27.512245  799803 cri.go:89] found id: "8ed2e44ffcad005a51eaa2da515c456cbde31d1c0f0f8025b9411ffda44f5ff8"
	I1206 09:53:27.512250  799803 cri.go:89] found id: "df87b3fa3a4a208955c1a48e6d46a19a5567b0311b97242991aea76fc0d6487e"
	I1206 09:53:27.512256  799803 cri.go:89] found id: "7e7db9271d27970d4ad67bfb8b35bb164eefe0492d4f17948f191a67d54e12bf"
	I1206 09:53:27.512261  799803 cri.go:89] found id: "9dc873b13be2daef40a2751e9c41eeada071f9d2a36935447fdcf8f69e38bcb0"
	I1206 09:53:27.512265  799803 cri.go:89] found id: "4740c81bbda6eb396add856fa79e529e77045345b6b8aafa409f0c035427e3e5"
	I1206 09:53:27.512270  799803 cri.go:89] found id: "1180b54a98400f332dbb4dda677c01fc02e3c44f901938b0567810c83d6df692"
	I1206 09:53:27.512275  799803 cri.go:89] found id: "585f10915444acd7acfdddbe9415b18fc4bb7c9d1e5009ad15a8bf10a9129068"
	I1206 09:53:27.512301  799803 cri.go:89] found id: "a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845"
	I1206 09:53:27.512309  799803 cri.go:89] found id: "e99a0c409aee494e84e9717b2c81fbd6716d787c2a6936c23c10595e6f8dc302"
	I1206 09:53:27.512312  799803 cri.go:89] found id: ""
	I1206 09:53:27.512364  799803 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:27.524736  799803 retry.go:31] will retry after 132.608277ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:27Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:27.658125  799803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:27.671531  799803 pause.go:52] kubelet running: false
	I1206 09:53:27.671591  799803 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:27.829889  799803 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:27.829993  799803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:27.898818  799803 cri.go:89] found id: "2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94"
	I1206 09:53:27.898841  799803 cri.go:89] found id: "e95c1b791a64eddfbcbb348c3a235e0708db18cf4a7f64bb9a7fff385ba3c65f"
	I1206 09:53:27.898845  799803 cri.go:89] found id: "8ed2e44ffcad005a51eaa2da515c456cbde31d1c0f0f8025b9411ffda44f5ff8"
	I1206 09:53:27.898848  799803 cri.go:89] found id: "df87b3fa3a4a208955c1a48e6d46a19a5567b0311b97242991aea76fc0d6487e"
	I1206 09:53:27.898851  799803 cri.go:89] found id: "7e7db9271d27970d4ad67bfb8b35bb164eefe0492d4f17948f191a67d54e12bf"
	I1206 09:53:27.898855  799803 cri.go:89] found id: "9dc873b13be2daef40a2751e9c41eeada071f9d2a36935447fdcf8f69e38bcb0"
	I1206 09:53:27.898859  799803 cri.go:89] found id: "4740c81bbda6eb396add856fa79e529e77045345b6b8aafa409f0c035427e3e5"
	I1206 09:53:27.898864  799803 cri.go:89] found id: "1180b54a98400f332dbb4dda677c01fc02e3c44f901938b0567810c83d6df692"
	I1206 09:53:27.898868  799803 cri.go:89] found id: "585f10915444acd7acfdddbe9415b18fc4bb7c9d1e5009ad15a8bf10a9129068"
	I1206 09:53:27.898877  799803 cri.go:89] found id: "a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845"
	I1206 09:53:27.898881  799803 cri.go:89] found id: "e99a0c409aee494e84e9717b2c81fbd6716d787c2a6936c23c10595e6f8dc302"
	I1206 09:53:27.898885  799803 cri.go:89] found id: ""
	I1206 09:53:27.898930  799803 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:27.913005  799803 retry.go:31] will retry after 344.092221ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:27Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:28.257545  799803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:28.271361  799803 pause.go:52] kubelet running: false
	I1206 09:53:28.271416  799803 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:28.429869  799803 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:28.429950  799803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:28.497941  799803 cri.go:89] found id: "2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94"
	I1206 09:53:28.497960  799803 cri.go:89] found id: "e95c1b791a64eddfbcbb348c3a235e0708db18cf4a7f64bb9a7fff385ba3c65f"
	I1206 09:53:28.497964  799803 cri.go:89] found id: "8ed2e44ffcad005a51eaa2da515c456cbde31d1c0f0f8025b9411ffda44f5ff8"
	I1206 09:53:28.497967  799803 cri.go:89] found id: "df87b3fa3a4a208955c1a48e6d46a19a5567b0311b97242991aea76fc0d6487e"
	I1206 09:53:28.497971  799803 cri.go:89] found id: "7e7db9271d27970d4ad67bfb8b35bb164eefe0492d4f17948f191a67d54e12bf"
	I1206 09:53:28.497974  799803 cri.go:89] found id: "9dc873b13be2daef40a2751e9c41eeada071f9d2a36935447fdcf8f69e38bcb0"
	I1206 09:53:28.497977  799803 cri.go:89] found id: "4740c81bbda6eb396add856fa79e529e77045345b6b8aafa409f0c035427e3e5"
	I1206 09:53:28.497980  799803 cri.go:89] found id: "1180b54a98400f332dbb4dda677c01fc02e3c44f901938b0567810c83d6df692"
	I1206 09:53:28.497983  799803 cri.go:89] found id: "585f10915444acd7acfdddbe9415b18fc4bb7c9d1e5009ad15a8bf10a9129068"
	I1206 09:53:28.497989  799803 cri.go:89] found id: "a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845"
	I1206 09:53:28.497992  799803 cri.go:89] found id: "e99a0c409aee494e84e9717b2c81fbd6716d787c2a6936c23c10595e6f8dc302"
	I1206 09:53:28.497994  799803 cri.go:89] found id: ""
	I1206 09:53:28.498029  799803 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:28.512467  799803 out.go:203] 
	W1206 09:53:28.514466  799803 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:53:28.514505  799803 out.go:285] * 
	* 
	W1206 09:53:28.519805  799803 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:53:28.523446  799803 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-521770 --alsologtostderr -v=1 failed: exit status 80
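The GUEST_PAUSE failure above comes from the pause flow visible in the stderr log: minikube disables the kubelet, lists kube-system/kubernetes-dashboard/istio-operator containers via crictl, then runs `sudo runc list -f json`, which keeps failing because /run/runc does not exist on the node. A hedged way to confirm the missing state directory from the host, assuming the profile is still running (the /run/crun path is an assumption, since recent CRI-O releases commonly default to the crun runtime):

	minikube ssh -p no-preload-521770 -- ls -d /run/runc /run/crun
	minikube ssh -p no-preload-521770 -- sudo crictl ps --quiet | head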
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-521770
helpers_test.go:243: (dbg) docker inspect no-preload-521770:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f",
	        "Created": "2025-12-06T09:51:06.611954102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 782292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:24.51201288Z",
	            "FinishedAt": "2025-12-06T09:52:23.527367682Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/hostname",
	        "HostsPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/hosts",
	        "LogPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f-json.log",
	        "Name": "/no-preload-521770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-521770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-521770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f",
	                "LowerDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-521770",
	                "Source": "/var/lib/docker/volumes/no-preload-521770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-521770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-521770",
	                "name.minikube.sigs.k8s.io": "no-preload-521770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3c990fe611afd33b23de21ebfd2a6301980eadf0d107a519c7785b552aaa36f0",
	            "SandboxKey": "/var/run/docker/netns/3c990fe611af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-521770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "335ab24bf65197b10f86bad2a0ebe3cc633e48da6bfe1bab2aae94fda11c69b4",
	                    "EndpointID": "d72c6091b7796330a4e8e6b1dcf4ad02a11e3fb5068b87de6c78774957b19dfa",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ca:ec:1f:71:ea:5c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-521770",
	                        "de37f97672bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
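The inspect dump above is what the post-mortem helpers parse to reach the node: the host-mapped SSH port is pulled out of NetworkSettings.Ports with a Go template. A sketch of that lookup, using the same template that appears in the cli_runner lines later in this log; against the JSON above it prints '33211':
	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-521770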
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521770 -n no-preload-521770
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521770 -n no-preload-521770: exit status 2 (373.462431ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
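The status probe above formats only the .Host field of minikube's status output, which is why it can print Running yet exit non-zero when some other component is degraded (hence the "may be ok" note). To see every field behind the exit code one can drop the template; this is a generic invocation, not one the harness ran here:
	out/minikube-linux-amd64 status -p no-preload-521770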
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-521770 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-521770 logs -n 25: (1.212483994s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p no-preload-521770 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p stopped-upgrade-031481                                                                                                                                                                                                                            │ stopped-upgrade-031481       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p no-preload-521770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-997968 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ newest-cni-641599 image list --format=json                                                                                                                                                                                                           │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p newest-cni-641599 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p auto-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-983381                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ no-preload-521770 image list --format=json                                                                                                                                                                                                           │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p no-preload-521770 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:53:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:53:10.314598  796626 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:10.314906  796626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:10.314917  796626 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:10.314923  796626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:10.315255  796626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:10.315874  796626 out.go:368] Setting JSON to false
	I1206 09:53:10.317570  796626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9334,"bootTime":1765005456,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:53:10.317651  796626 start.go:143] virtualization: kvm guest
	I1206 09:53:10.321620  796626 out.go:179] * [auto-983381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:53:10.323587  796626 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:53:10.323698  796626 notify.go:221] Checking for updates...
	I1206 09:53:10.325764  796626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:53:10.329609  796626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:10.330739  796626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:53:10.331787  796626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:53:10.332975  796626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:53:10.334901  796626 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:10.335068  796626 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:10.335201  796626 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:53:10.335332  796626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:53:10.367684  796626 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:53:10.367791  796626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:10.442629  796626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-06 09:53:10.429605617 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:10.442794  796626 docker.go:319] overlay module found
	I1206 09:53:10.444478  796626 out.go:179] * Using the docker driver based on user configuration
	I1206 09:53:10.445531  796626 start.go:309] selected driver: docker
	I1206 09:53:10.445551  796626 start.go:927] validating driver "docker" against <nil>
	I1206 09:53:10.445569  796626 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:53:10.446412  796626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:10.518950  796626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-06 09:53:10.506081396 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:10.519164  796626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:53:10.519507  796626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:10.522573  796626 out.go:179] * Using Docker driver with root privileges
	I1206 09:53:10.523570  796626 cni.go:84] Creating CNI manager for ""
	I1206 09:53:10.523673  796626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:53:10.523689  796626 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:53:10.523789  796626 start.go:353] cluster config:
	{Name:auto-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:10.524986  796626 out.go:179] * Starting "auto-983381" primary control-plane node in "auto-983381" cluster
	I1206 09:53:10.526019  796626 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:53:10.527138  796626 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:53:10.528368  796626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:10.528411  796626 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:53:10.528425  796626 cache.go:65] Caching tarball of preloaded images
	I1206 09:53:10.528485  796626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:53:10.528561  796626 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:53:10.528579  796626 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:53:10.528730  796626 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/config.json ...
	I1206 09:53:10.528760  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/config.json: {Name:mk8ffbb4e65ebd7712373ae725b794a8a70e0dc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:10.553808  796626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:53:10.553830  796626 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:53:10.553852  796626 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:53:10.553893  796626 start.go:360] acquireMachinesLock for auto-983381: {Name:mkab719bcf4a9828bf3d3e79d20d83abeb871df6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:53:10.554010  796626 start.go:364] duration metric: took 93.997µs to acquireMachinesLock for "auto-983381"
	I1206 09:53:10.554039  796626 start.go:93] Provisioning new machine with config: &{Name:auto-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:10.554130  796626 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:53:09.212965  792441 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:09.213030  792441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:53:09.213102  792441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:53:09.213000  792441 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Status}}
	I1206 09:53:09.250842  792441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33226 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:53:09.256608  792441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33226 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:53:09.259598  792441 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:09.259620  792441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:53:09.259672  792441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:53:09.299891  792441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33226 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:53:09.384035  792441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:09.409667  792441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:09.409910  792441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-997968" to be "Ready" ...
	I1206 09:53:09.424918  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:53:09.424951  792441 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:53:09.447122  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:53:09.447152  792441 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:53:09.453930  792441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:09.466241  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:53:09.466262  792441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:53:09.486081  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:53:09.486106  792441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:53:09.513420  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:53:09.513450  792441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:53:09.533054  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:53:09.533076  792441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:53:09.549987  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:53:09.550074  792441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:53:09.565345  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:53:09.565369  792441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:53:09.581485  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:53:09.581512  792441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:53:09.602415  792441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:53:10.869484  792441 node_ready.go:49] node "embed-certs-997968" is "Ready"
	I1206 09:53:10.869523  792441 node_ready.go:38] duration metric: took 1.459586599s for node "embed-certs-997968" to be "Ready" ...
	I1206 09:53:10.869543  792441 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:53:10.869603  792441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:53:11.473511  792441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.063755283s)
	I1206 09:53:11.473562  792441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019588987s)
	I1206 09:53:11.473702  792441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.871247295s)
	I1206 09:53:11.473748  792441 api_server.go:72] duration metric: took 2.297485271s to wait for apiserver process to appear ...
	I1206 09:53:11.473767  792441 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:53:11.473973  792441 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:53:11.476724  792441 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-997968 addons enable metrics-server
	
	I1206 09:53:11.479668  792441 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:53:11.479693  792441 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:53:11.490923  792441 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1206 09:53:09.996622  782026 pod_ready.go:104] pod "coredns-7d764666f9-mhwh5" is not "Ready", error: <nil>
	W1206 09:53:12.496132  782026 pod_ready.go:104] pod "coredns-7d764666f9-mhwh5" is not "Ready", error: <nil>
	I1206 09:53:13.495106  782026 pod_ready.go:94] pod "coredns-7d764666f9-mhwh5" is "Ready"
	I1206 09:53:13.495137  782026 pod_ready.go:86] duration metric: took 39.005759854s for pod "coredns-7d764666f9-mhwh5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.498108  782026 pod_ready.go:83] waiting for pod "etcd-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.502452  782026 pod_ready.go:94] pod "etcd-no-preload-521770" is "Ready"
	I1206 09:53:13.502503  782026 pod_ready.go:86] duration metric: took 4.370843ms for pod "etcd-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.504713  782026 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.509041  782026 pod_ready.go:94] pod "kube-apiserver-no-preload-521770" is "Ready"
	I1206 09:53:13.509064  782026 pod_ready.go:86] duration metric: took 4.32904ms for pod "kube-apiserver-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.510960  782026 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.693730  782026 pod_ready.go:94] pod "kube-controller-manager-no-preload-521770" is "Ready"
	I1206 09:53:13.693763  782026 pod_ready.go:86] duration metric: took 182.77926ms for pod "kube-controller-manager-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.894574  782026 pod_ready.go:83] waiting for pod "kube-proxy-t7vrx" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:53:11.699708  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:14.198625  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:14.486992  782026 pod_ready.go:94] pod "kube-proxy-t7vrx" is "Ready"
	I1206 09:53:14.487023  782026 pod_ready.go:86] duration metric: took 592.413178ms for pod "kube-proxy-t7vrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:14.493871  782026 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:14.892551  782026 pod_ready.go:94] pod "kube-scheduler-no-preload-521770" is "Ready"
	I1206 09:53:14.892586  782026 pod_ready.go:86] duration metric: took 398.684783ms for pod "kube-scheduler-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:14.892602  782026 pod_ready.go:40] duration metric: took 40.475424909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:14.949257  782026 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:53:15.058922  782026 out.go:179] * Done! kubectl is now configured to use "no-preload-521770" cluster and "default" namespace by default
	I1206 09:53:10.559201  796626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:53:10.559524  796626 start.go:159] libmachine.API.Create for "auto-983381" (driver="docker")
	I1206 09:53:10.559576  796626 client.go:173] LocalClient.Create starting
	I1206 09:53:10.559685  796626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:53:10.559739  796626 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:10.559766  796626 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:10.559841  796626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:53:10.559870  796626 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:10.559900  796626 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:10.560369  796626 cli_runner.go:164] Run: docker network inspect auto-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:53:10.584692  796626 cli_runner.go:211] docker network inspect auto-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:53:10.584811  796626 network_create.go:284] running [docker network inspect auto-983381] to gather additional debugging logs...
	I1206 09:53:10.584842  796626 cli_runner.go:164] Run: docker network inspect auto-983381
	W1206 09:53:10.608435  796626 cli_runner.go:211] docker network inspect auto-983381 returned with exit code 1
	I1206 09:53:10.608493  796626 network_create.go:287] error running [docker network inspect auto-983381]: docker network inspect auto-983381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-983381 not found
	I1206 09:53:10.608521  796626 network_create.go:289] output of [docker network inspect auto-983381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-983381 not found
	
	** /stderr **
	I1206 09:53:10.608634  796626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:10.632296  796626 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:53:10.633370  796626 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:53:10.634241  796626 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:53:10.635187  796626 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d50290}
	I1206 09:53:10.635229  796626 network_create.go:124] attempt to create docker network auto-983381 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1206 09:53:10.635300  796626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-983381 auto-983381
	I1206 09:53:10.716818  796626 network_create.go:108] docker network auto-983381 192.168.76.0/24 created
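
The scan above walks candidate private /24 subnets, stepping the third octet by 9 (49, 58, 67, ...), and skips any range already claimed by an existing bridge interface. A rough shell equivalent of that selection, assuming the same step and the create flags shown in the log:

    # Pick the first 192.168.x.0/24 not used by any existing docker network.
    used=$(docker network ls -q | xargs -r docker network inspect \
            --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
    for third in 49 58 67 76 85 94; do
        subnet="192.168.${third}.0/24"
        if ! grep -qF "$subnet" <<<"$used"; then
            docker network create --driver=bridge --subnet="$subnet" \
                --gateway="192.168.${third}.1" \
                -o com.docker.network.driver.mtu=1500 auto-983381
            break
        fi
    done
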
	I1206 09:53:10.716874  796626 kic.go:121] calculated static IP "192.168.76.2" for the "auto-983381" container
	I1206 09:53:10.716966  796626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:53:10.755068  796626 cli_runner.go:164] Run: docker volume create auto-983381 --label name.minikube.sigs.k8s.io=auto-983381 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:53:10.783172  796626 oci.go:103] Successfully created a docker volume auto-983381
	I1206 09:53:10.783286  796626 cli_runner.go:164] Run: docker run --rm --name auto-983381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-983381 --entrypoint /usr/bin/test -v auto-983381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:53:11.542483  796626 oci.go:107] Successfully prepared a docker volume auto-983381
	I1206 09:53:11.542589  796626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:11.542608  796626 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:53:11.542785  796626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
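
The preload tarball is unpacked into the named volume by a throwaway tar container, so the cached images are already under /var when the node container later mounts the same volume. One way to spot-check the result; the ls path is an assumption about the CRI-O preload layout:

    # List what extraction left in the volume (path layout assumed).
    docker run --rm -v auto-983381:/var \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 \
        ls /var/lib/containers
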
	I1206 09:53:11.494191  792441 addons.go:530] duration metric: took 2.317774385s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1206 09:53:11.974112  792441 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:53:11.979033  792441 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:53:11.979069  792441 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:53:12.474620  792441 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:53:12.480642  792441 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 09:53:12.481903  792441 api_server.go:141] control plane version: v1.34.2
	I1206 09:53:12.481937  792441 api_server.go:131] duration metric: took 1.008162753s to wait for apiserver health ...
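
The wait above simply polls /healthz until the apiserver stops returning 500 (the failing rbac/bootstrap-roles hook) and answers 200 with "ok". A minimal curl loop doing the same against the endpoint from this run; -k skips certificate verification for brevity:

    # Poll the apiserver health endpoint until it reports 200/"ok".
    url="https://192.168.85.2:8443/healthz"
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$url")" = "200" ]; do
        sleep 0.5
    done
    curl -sk "$url"   # prints: ok
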
	I1206 09:53:12.481949  792441 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:53:12.485497  792441 system_pods.go:59] 8 kube-system pods found
	I1206 09:53:12.485552  792441 system_pods.go:61] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:53:12.485567  792441 system_pods.go:61] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:53:12.485576  792441 system_pods.go:61] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:53:12.485595  792441 system_pods.go:61] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:53:12.485604  792441 system_pods.go:61] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:53:12.485611  792441 system_pods.go:61] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:53:12.485619  792441 system_pods.go:61] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:53:12.485624  792441 system_pods.go:61] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Running
	I1206 09:53:12.485632  792441 system_pods.go:74] duration metric: took 3.67554ms to wait for pod list to return data ...
	I1206 09:53:12.485642  792441 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:53:12.488382  792441 default_sa.go:45] found service account: "default"
	I1206 09:53:12.488409  792441 default_sa.go:55] duration metric: took 2.759903ms for default service account to be created ...
	I1206 09:53:12.488419  792441 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:53:12.491886  792441 system_pods.go:86] 8 kube-system pods found
	I1206 09:53:12.491921  792441 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:53:12.491932  792441 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:53:12.491942  792441 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:53:12.491950  792441 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:53:12.491958  792441 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:53:12.491963  792441 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:53:12.491971  792441 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:53:12.491983  792441 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Running
	I1206 09:53:12.491993  792441 system_pods.go:126] duration metric: took 3.566416ms to wait for k8s-apps to be running ...
	I1206 09:53:12.492005  792441 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:53:12.492061  792441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:12.509669  792441 system_svc.go:56] duration metric: took 17.650745ms WaitForService to wait for kubelet
	I1206 09:53:12.509700  792441 kubeadm.go:587] duration metric: took 3.333438676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:12.509723  792441 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:53:12.513432  792441 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:53:12.513490  792441 node_conditions.go:123] node cpu capacity is 8
	I1206 09:53:12.513508  792441 node_conditions.go:105] duration metric: took 3.778313ms to run NodePressure ...
	I1206 09:53:12.513526  792441 start.go:242] waiting for startup goroutines ...
	I1206 09:53:12.513535  792441 start.go:247] waiting for cluster config update ...
	I1206 09:53:12.513550  792441 start.go:256] writing updated cluster config ...
	I1206 09:53:12.513876  792441 ssh_runner.go:195] Run: rm -f paused
	I1206 09:53:12.519117  792441 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:12.523919  792441 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:53:14.661295  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:16.698193  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:19.197363  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
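
These warnings come from repeatedly reading each pod's Ready condition until it flips to True or the pod disappears. The equivalent single check with kubectl, using a pod name from this run:

    # Prints "True" once every container passes its readiness probe.
    kubectl -n kube-system get pod coredns-66bc5c9577-kw8nl \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
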
	I1206 09:53:16.779749  796626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (5.236886664s)
	I1206 09:53:16.779790  796626 kic.go:203] duration metric: took 5.237178224s to extract preloaded images to volume ...
	W1206 09:53:16.779905  796626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:53:16.779953  796626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:53:16.780021  796626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:53:16.865941  796626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-983381 --name auto-983381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-983381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-983381 --network auto-983381 --ip 192.168.76.2 --volume auto-983381:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:53:17.268388  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Running}}
	I1206 09:53:17.292606  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:17.319426  796626 cli_runner.go:164] Run: docker exec auto-983381 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:53:17.380406  796626 oci.go:144] the created container "auto-983381" has a running status.
	I1206 09:53:17.380447  796626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa...
	I1206 09:53:17.599095  796626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:53:17.637079  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:17.668664  796626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:53:17.668687  796626 kic_runner.go:114] Args: [docker exec --privileged auto-983381 chown docker:docker /home/docker/.ssh/authorized_keys]
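
Key provisioning here pushes the freshly generated public key into the container and fixes ownership so the in-container "docker" user can accept SSH logins on the published port. A hand-rolled version of the same steps, with paths taken from the log:

    # Install the minikube-generated public key for the container's docker user.
    pub=/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa.pub
    docker exec --privileged auto-983381 mkdir -p /home/docker/.ssh
    docker cp "$pub" auto-983381:/home/docker/.ssh/authorized_keys
    docker exec --privileged auto-983381 chown -R docker:docker /home/docker/.ssh
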
	I1206 09:53:17.735989  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:17.754909  796626 machine.go:94] provisionDockerMachine start ...
	I1206 09:53:17.755007  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:17.776534  796626 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:17.776925  796626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33231 <nil> <nil>}
	I1206 09:53:17.776960  796626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:53:17.932625  796626 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-983381
	
	I1206 09:53:17.932658  796626 ubuntu.go:182] provisioning hostname "auto-983381"
	I1206 09:53:17.932733  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:17.956406  796626 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:17.956810  796626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33231 <nil> <nil>}
	I1206 09:53:17.956833  796626 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-983381 && echo "auto-983381" | sudo tee /etc/hostname
	I1206 09:53:18.117199  796626 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-983381
	
	I1206 09:53:18.117296  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:18.141288  796626 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:18.141665  796626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33231 <nil> <nil>}
	I1206 09:53:18.141692  796626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-983381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-983381/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-983381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:53:18.287790  796626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:53:18.287824  796626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:53:18.287859  796626 ubuntu.go:190] setting up certificates
	I1206 09:53:18.287884  796626 provision.go:84] configureAuth start
	I1206 09:53:18.287960  796626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-983381
	I1206 09:53:18.307507  796626 provision.go:143] copyHostCerts
	I1206 09:53:18.307583  796626 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:53:18.307598  796626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:53:18.307835  796626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:53:18.307969  796626 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:53:18.307985  796626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:53:18.308028  796626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:53:18.308109  796626 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:53:18.308120  796626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:53:18.308170  796626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:53:18.308240  796626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.auto-983381 san=[127.0.0.1 192.168.76.2 auto-983381 localhost minikube]
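
configureAuth issues a server certificate signed by the local minikube CA with exactly the SANs listed above. minikube does this in Go, but an openssl sketch of the equivalent issuance (file names shortened) looks like:

    # Issue a CA-signed server cert covering minikube's SANs (bash required
    # for the <(...) process substitution).
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -subj "/O=jenkins.auto-983381" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
        -CAcreateserial -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:auto-983381,DNS:localhost,DNS:minikube')
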
	I1206 09:53:18.580115  796626 provision.go:177] copyRemoteCerts
	I1206 09:53:18.580184  796626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:53:18.580241  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:18.605051  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:18.710847  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:53:18.736421  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1206 09:53:18.760005  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:53:18.784425  796626 provision.go:87] duration metric: took 496.52003ms to configureAuth
	I1206 09:53:18.784470  796626 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:53:18.784690  796626 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:18.784827  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:18.807839  796626 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:18.808149  796626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33231 <nil> <nil>}
	I1206 09:53:18.808177  796626 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:53:19.100886  796626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:53:19.100934  796626 machine.go:97] duration metric: took 1.346001172s to provisionDockerMachine
	I1206 09:53:19.100952  796626 client.go:176] duration metric: took 8.541367609s to LocalClient.Create
	I1206 09:53:19.100980  796626 start.go:167] duration metric: took 8.541459534s to libmachine.API.Create "auto-983381"
	I1206 09:53:19.100997  796626 start.go:293] postStartSetup for "auto-983381" (driver="docker")
	I1206 09:53:19.101014  796626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:53:19.101097  796626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:53:19.101153  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:19.121148  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:19.220339  796626 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:53:19.224805  796626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:53:19.224843  796626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:53:19.224857  796626 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:53:19.224921  796626 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:53:19.225039  796626 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:53:19.225178  796626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:53:19.233348  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:19.253961  796626 start.go:296] duration metric: took 152.924749ms for postStartSetup
	I1206 09:53:19.254276  796626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-983381
	I1206 09:53:19.271744  796626 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/config.json ...
	I1206 09:53:19.272001  796626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:53:19.272051  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:19.289588  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:19.388709  796626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:53:19.393578  796626 start.go:128] duration metric: took 8.839434366s to createHost
	I1206 09:53:19.393601  796626 start.go:83] releasing machines lock for "auto-983381", held for 8.839578656s
	I1206 09:53:19.393687  796626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-983381
	I1206 09:53:19.411069  796626 ssh_runner.go:195] Run: cat /version.json
	I1206 09:53:19.411131  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:19.411137  796626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:53:19.411213  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:19.430385  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:19.430781  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:19.574328  796626 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:19.581326  796626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:53:19.617087  796626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:53:19.622054  796626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:53:19.622144  796626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:53:19.648072  796626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:53:19.648095  796626 start.go:496] detecting cgroup driver to use...
	I1206 09:53:19.648131  796626 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:53:19.648185  796626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:53:19.664452  796626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:53:19.676559  796626 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:53:19.676609  796626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:53:19.692657  796626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:53:19.712532  796626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:53:19.795865  796626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:53:19.885579  796626 docker.go:234] disabling docker service ...
	I1206 09:53:19.885656  796626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:53:19.906614  796626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:53:19.921011  796626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:53:20.009418  796626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:53:20.090829  796626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:53:20.103872  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:53:20.119288  796626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:53:20.119354  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.130546  796626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:53:20.130611  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.140002  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.149198  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.158672  796626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:53:20.167366  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.176677  796626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.190832  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.200879  796626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:53:20.209137  796626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:53:20.216989  796626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:20.309704  796626 ssh_runner.go:195] Run: sudo systemctl restart crio
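
After the sed edits above, the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl should all be present in the CRI-O drop-in before the restart takes effect. A quick verification, run inside the node container:

    # Confirm the rewritten CRI-O drop-in and that the restart converged.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio   # "active" once the restart completes
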
	I1206 09:53:20.832598  796626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:53:20.832674  796626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:53:20.838156  796626 start.go:564] Will wait 60s for crictl version
	I1206 09:53:20.838218  796626 ssh_runner.go:195] Run: which crictl
	I1206 09:53:20.843152  796626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:53:20.878090  796626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:53:20.878179  796626 ssh_runner.go:195] Run: crio --version
	I1206 09:53:20.918917  796626 ssh_runner.go:195] Run: crio --version
	I1206 09:53:20.960624  796626 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1206 09:53:17.029902  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:19.030595  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:21.031656  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:20.962027  796626 cli_runner.go:164] Run: docker network inspect auto-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:20.984534  796626 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 09:53:20.989870  796626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:21.003644  796626 kubeadm.go:884] updating cluster {Name:auto-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:53:21.003807  796626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:21.003897  796626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:21.050155  796626 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:21.050182  796626 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:53:21.050244  796626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:21.084927  796626 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:21.084957  796626 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:53:21.084966  796626 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1206 09:53:21.085103  796626 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-983381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:53:21.085196  796626 ssh_runner.go:195] Run: crio config
	I1206 09:53:21.147879  796626 cni.go:84] Creating CNI manager for ""
	I1206 09:53:21.147920  796626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:53:21.147945  796626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:53:21.147977  796626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-983381 NodeName:auto-983381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:53:21.148167  796626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-983381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
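
The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new before being copied into place. One way to sanity-check such a config without touching the node is kubeadm's dry-run mode; SystemVerification is ignored here because the docker driver fails it by design, as the log notes further down:

    # Validate the rendered kubeadm config without creating anything.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new \
        --dry-run --ignore-preflight-errors=SystemVerification
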
	
	I1206 09:53:21.148255  796626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:53:21.159057  796626 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:53:21.159127  796626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:53:21.169519  796626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1206 09:53:21.188281  796626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:53:21.208263  796626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1206 09:53:21.225796  796626 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:53:21.230699  796626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:21.244127  796626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:21.355322  796626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:21.387784  796626 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381 for IP: 192.168.76.2
	I1206 09:53:21.387808  796626 certs.go:195] generating shared ca certs ...
	I1206 09:53:21.387830  796626 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.388011  796626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:53:21.388094  796626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:53:21.388117  796626 certs.go:257] generating profile certs ...
	I1206 09:53:21.388199  796626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.key
	I1206 09:53:21.388220  796626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.crt with IP's: []
	I1206 09:53:21.527779  796626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.crt ...
	I1206 09:53:21.527813  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.crt: {Name:mk2ad50101add546ad51db7e44569749ea1a5f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.528042  796626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.key ...
	I1206 09:53:21.528065  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.key: {Name:mk6b957f3fb4700b5610e36b0ef33f28d90a8bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.528199  796626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key.2ee95c02
	I1206 09:53:21.528221  796626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt.2ee95c02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1206 09:53:21.559539  796626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt.2ee95c02 ...
	I1206 09:53:21.559571  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt.2ee95c02: {Name:mkfde50eb4629d079ca9f921db7a290f4181d6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.559774  796626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key.2ee95c02 ...
	I1206 09:53:21.559795  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key.2ee95c02: {Name:mkfb936c148fef11e349055de9ebaf61d63b8a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.559951  796626 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt.2ee95c02 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt
	I1206 09:53:21.560085  796626 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key.2ee95c02 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key
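
The apiserver profile cert generated above must carry the cluster service IP (10.96.0.1), loopback, and the node IP as SANs, or in-cluster clients would reject the connection. To confirm what was actually issued:

    # Show the SANs baked into the freshly issued apiserver cert.
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
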
	I1206 09:53:21.560199  796626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.key
	I1206 09:53:21.560231  796626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.crt with IP's: []
	I1206 09:53:21.715899  796626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.crt ...
	I1206 09:53:21.715932  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.crt: {Name:mkdc8ba936c68350260908df68becc7b02551b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.716144  796626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.key ...
	I1206 09:53:21.716169  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.key: {Name:mk536bcfaff43bc1e0f5d3922abe12261c37b246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.716380  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:53:21.716421  796626 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:53:21.716432  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:53:21.716468  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:53:21.716493  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:53:21.716520  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:53:21.716592  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:21.717258  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:53:21.741669  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:53:21.764916  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:53:21.787222  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:53:21.810264  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1206 09:53:21.830290  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:53:21.851361  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:53:21.874437  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:53:21.897806  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:53:21.925127  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:53:21.949014  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:53:21.974019  796626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:53:21.991183  796626 ssh_runner.go:195] Run: openssl version
	I1206 09:53:21.999540  796626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:53:22.010959  796626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:53:22.021880  796626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:53:22.027590  796626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:53:22.027666  796626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:53:22.089285  796626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:53:22.100350  796626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
	I1206 09:53:22.111529  796626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:53:22.122127  796626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:53:22.132733  796626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:53:22.137587  796626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:53:22.137649  796626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:53:22.195795  796626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:22.207559  796626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:22.218889  796626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:22.230203  796626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:53:22.241028  796626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:22.246203  796626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:22.246293  796626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:22.304669  796626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:53:22.315409  796626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
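
The "openssl x509 -hash" / "ln -fs" pairs above implement the standard OpenSSL CA directory layout: each trusted certificate gets a symlink named <subject-hash>.0 so verification can locate it by hash (b5213941 is minikubeCA's hash in this run). The pattern in two lines:

    # Link a CA cert under its subject hash so OpenSSL lookup can find it.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${hash}.0
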
	I1206 09:53:22.327336  796626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:53:22.332786  796626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:53:22.332859  796626 kubeadm.go:401] StartCluster: {Name:auto-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:22.332963  796626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:53:22.333021  796626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:53:22.373356  796626 cri.go:89] found id: ""
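The empty `found id: ""` result is the signal that no kube-system containers exist yet, so this run is treated as a cold start rather than a restart. The same query can be reproduced on the node with the command the log shows:

    # List all kube-system container IDs known to CRI-O (empty output = fresh node).
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system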
	I1206 09:53:22.373434  796626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:53:22.384705  796626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:53:22.396467  796626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:53:22.396542  796626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:53:22.408067  796626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:53:22.408090  796626 kubeadm.go:158] found existing configuration files:
	
	I1206 09:53:22.408140  796626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:53:22.419357  796626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:53:22.419426  796626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:53:22.430093  796626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:53:22.440311  796626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:53:22.440380  796626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:53:22.450909  796626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:53:22.461671  796626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:53:22.461736  796626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:53:22.472006  796626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:53:22.482874  796626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:53:22.482953  796626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
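Each of the four kubeconfigs is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. A condensed shell equivalent of the four grep/rm pairs above, using the same URL and paths as in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it points at the expected control-plane URL.
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done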
	I1206 09:53:22.494029  796626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:53:22.548000  796626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:53:22.548072  796626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:53:22.576658  796626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:53:22.576787  796626 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:53:22.576875  796626 kubeadm.go:319] OS: Linux
	I1206 09:53:22.576957  796626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:53:22.577029  796626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:53:22.577099  796626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:53:22.577169  796626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:53:22.577232  796626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:53:22.577310  796626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:53:22.577378  796626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:53:22.577437  796626 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:53:22.657494  796626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:53:22.657632  796626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:53:22.657791  796626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:53:22.666745  796626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1206 09:53:21.198540  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:23.698589  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:22.669604  796626 out.go:252]   - Generating certificates and keys ...
	I1206 09:53:22.669731  796626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:53:22.669850  796626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:53:23.040095  796626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:53:23.215338  796626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:53:23.459338  796626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:53:23.932487  796626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:53:24.081617  796626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:53:24.081768  796626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-983381 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:53:24.293675  796626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:53:24.293805  796626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-983381 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:53:24.578599  796626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:53:24.665045  796626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:53:24.756902  796626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:53:24.757051  796626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1206 09:53:23.530910  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:26.030073  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:25.443500  796626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:53:25.604502  796626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:53:25.767791  796626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:53:25.882784  796626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:53:26.601886  796626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:53:26.602425  796626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:53:26.605938  796626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
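SystemVerification is skipped on the docker driver because the kernel-level checks do not apply inside a container; the preflight output above still confirms that all required cgroup controllers are enabled. If needed, the preflight phase alone can be re-run against the same config (paths taken from the init command above):

    # Re-run only kubeadm's preflight checks with the same exemption.
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification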
	
	
	==> CRI-O <==
	Dec 06 09:52:55 no-preload-521770 crio[569]: time="2025-12-06T09:52:55.794271333Z" level=info msg="Started container" PID=1766 containerID=f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper id=6ce0e8f7-c6cc-4f2c-ba74-f58dd964c5fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=02e6cc91bd6818ff962808df3d688db5f5f5a46b6cc5ae0d9e90ff2ee592b007
	Dec 06 09:52:55 no-preload-521770 crio[569]: time="2025-12-06T09:52:55.839550997Z" level=info msg="Removing container: 6d8adbbfbceb854ef17233697e783e687b0e2f5a41f9ffcf9fad4e00194e777c" id=92ad8357-ef05-46cf-88dd-cbdbe4a6bb49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:52:55 no-preload-521770 crio[569]: time="2025-12-06T09:52:55.849450172Z" level=info msg="Removed container 6d8adbbfbceb854ef17233697e783e687b0e2f5a41f9ffcf9fad4e00194e777c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper" id=92ad8357-ef05-46cf-88dd-cbdbe4a6bb49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.86480105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=31107f80-f18a-4f05-b7ac-a0cad986d42b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.865859929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3d23702f-b45a-4fe7-a6f4-fb000888595c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.866937033Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0d7151bd-84b4-4454-92b5-c7f3b4bfa938 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.86708298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.871445121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.871676606Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b6a642ad33efe87b54c04dbc7f33290f95d008640c794a5adf9132192c1e458c/merged/etc/passwd: no such file or directory"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.871710845Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b6a642ad33efe87b54c04dbc7f33290f95d008640c794a5adf9132192c1e458c/merged/etc/group: no such file or directory"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.871979356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.907082579Z" level=info msg="Created container 2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94: kube-system/storage-provisioner/storage-provisioner" id=0d7151bd-84b4-4454-92b5-c7f3b4bfa938 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.907804733Z" level=info msg="Starting container: 2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94" id=745f1026-5c5e-486c-9296-2aa73cf088d5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.9097309Z" level=info msg="Started container" PID=1780 containerID=2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94 description=kube-system/storage-provisioner/storage-provisioner id=745f1026-5c5e-486c-9296-2aa73cf088d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1788d42dc35110d0c49217fe283fa722a77c6c3c4b692d6e137ae061ee6770e
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.741429486Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=13fb6038-40c8-47a1-9013-589743508d13 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.745694219Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4d39913e-42c8-4928-80c8-e0f8819e5371 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.746932122Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper" id=351709c0-514d-43be-a4ab-7faabfc43fa9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.747091831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.763632436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.764314033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.809841785Z" level=info msg="Created container a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper" id=351709c0-514d-43be-a4ab-7faabfc43fa9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.811743228Z" level=info msg="Starting container: a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845" id=64ca8d0f-02c3-4a6d-b9db-14ffdf0bbe4f name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.814945355Z" level=info msg="Started container" PID=1817 containerID=a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper id=64ca8d0f-02c3-4a6d-b9db-14ffdf0bbe4f name=/runtime.v1.RuntimeService/StartContainer sandboxID=02e6cc91bd6818ff962808df3d688db5f5f5a46b6cc5ae0d9e90ff2ee592b007
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.908268241Z" level=info msg="Removing container: f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f" id=10faafc1-fca5-43c9-a320-27187c01ddae name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.954352192Z" level=info msg="Removed container f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper" id=10faafc1-fca5-43c9-a320-27187c01ddae name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a30f79b67fa58       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   3                   02e6cc91bd681       dashboard-metrics-scraper-867fb5f87b-lhdkf   kubernetes-dashboard
	2d35003201fe1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   f1788d42dc351       storage-provisioner                          kube-system
	e99a0c409aee4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   92d59e46022fc       kubernetes-dashboard-b84665fb8-sd5kj         kubernetes-dashboard
	e95c1b791a64e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           55 seconds ago      Running             coredns                     0                   dbea96a64c69b       coredns-7d764666f9-mhwh5                     kube-system
	34879bfa0a35c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   b4e470b64e1de       busybox                                      default
	8ed2e44ffcad0       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           55 seconds ago      Running             kube-proxy                  0                   1e9c368bf4708       kube-proxy-t7vrx                             kube-system
	df87b3fa3a4a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   f1788d42dc351       storage-provisioner                          kube-system
	7e7db9271d279       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   8e13eb1a5ae20       kindnet-2w8b5                                kube-system
	9dc873b13be2d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           58 seconds ago      Running             kube-scheduler              0                   54f030d1eb746       kube-scheduler-no-preload-521770             kube-system
	4740c81bbda6e       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           58 seconds ago      Running             kube-controller-manager     0                   535d045965bb3       kube-controller-manager-no-preload-521770    kube-system
	1180b54a98400       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   913c92d938e9a       etcd-no-preload-521770                       kube-system
	585f10915444a       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           58 seconds ago      Running             kube-apiserver              0                   80b34865a5557       kube-apiserver-no-preload-521770             kube-system
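This table is the CRI view from inside the node; the ATTEMPT column shows dashboard-metrics-scraper already on its third restart while the rest of the control plane runs normally. One way to reproduce the listing, assuming the profile name matches the hostnames above:

    # Run crictl on the node via minikube ssh and list all CRI-O containers.
    minikube -p no-preload-521770 ssh -- sudo crictl ps -a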
	
	
	==> coredns [e95c1b791a64eddfbcbb348c3a235e0708db18cf4a7f64bb9a7fff385ba3c65f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38202 - 27723 "HINFO IN 7691446011699020337.1410784681661014434. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025257086s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
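The "Plugins not ready" and "Failed to watch" lines explain why the pod stays unready at first: the ready plugin gates readiness until the kubernetes plugin has synced its watches against the API server. Assuming the stock CoreDNS deployment (ready plugin on its default port 8181), readiness can be probed directly:

    # Forward the readiness port and query it; "OK" means all plugins are ready.
    kubectl -n kube-system port-forward deploy/coredns 8181:8181 &
    curl -s http://127.0.0.1:8181/ready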
	
	
	==> describe nodes <==
	Name:               no-preload-521770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-521770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=no-preload-521770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_51_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:51:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-521770
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:53:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:53:03 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:53:03 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:53:03 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:53:03 +0000   Sat, 06 Dec 2025 09:51:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-521770
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                77a79082-49d3-48ca-89e8-de80a1e12164
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-mhwh5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-521770                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-2w8b5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-521770              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-521770     200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-t7vrx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-521770              100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-lhdkf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-sd5kj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node no-preload-521770 event: Registered Node no-preload-521770 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-521770 event: Registered Node no-preload-521770 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
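The repeated "martian source" entries record packets whose source address is impossible on the receiving interface (here 127.0.0.1 arriving on eth0), which the kernel logs when martian logging is enabled. The relevant sysctls can be inspected on the host:

    # log_martians=1 enables these messages; rp_filter controls the reverse-path check itself.
    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter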
	
	
	==> etcd [1180b54a98400f332dbb4dda677c01fc02e3c44f901938b0567810c83d6df692] <==
	{"level":"warn","ts":"2025-12-06T09:52:32.119861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.128718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.136268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.143278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.150473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.156959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.163637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.170350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.180601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.186864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.193398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.199916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.206511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.220820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.227540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.234427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.241545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.248336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.260368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.267235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.275653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.283415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.339734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:14.485528Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.604105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-521770\" limit:1 ","response":"range_response_count:1 size:5270"}
	{"level":"info","ts":"2025-12-06T09:53:14.485636Z","caller":"traceutil/trace.go:172","msg":"trace[1154857655] range","detail":"{range_begin:/registry/minions/no-preload-521770; range_end:; response_count:1; response_revision:676; }","duration":"193.730067ms","start":"2025-12-06T09:53:14.291890Z","end":"2025-12-06T09:53:14.485620Z","steps":["trace[1154857655] 'range keys from in-memory index tree'  (duration: 193.443792ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:53:29 up  2:35,  0 user,  load average: 5.35, 3.46, 3.41
	Linux no-preload-521770 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e7db9271d27970d4ad67bfb8b35bb164eefe0492d4f17948f191a67d54e12bf] <==
	I1206 09:52:34.364864       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:52:34.365158       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:52:34.365322       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:52:34.365350       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:52:34.365377       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:52:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:52:34.569489       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:52:34.569539       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:52:34.569622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:52:34.570081       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:52:34.869712       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:52:34.869743       1 metrics.go:72] Registering metrics
	I1206 09:52:34.869798       1 controller.go:711] "Syncing nftables rules"
	I1206 09:52:44.569987       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:52:44.570059       1 main.go:301] handling current node
	I1206 09:52:54.573553       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:52:54.573585       1 main.go:301] handling current node
	I1206 09:53:04.569638       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:53:04.569693       1 main.go:301] handling current node
	I1206 09:53:14.569747       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:53:14.569793       1 main.go:301] handling current node
	I1206 09:53:24.571631       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:53:24.571669       1 main.go:301] handling current node
	
	
	==> kube-apiserver [585f10915444acd7acfdddbe9415b18fc4bb7c9d1e5009ad15a8bf10a9129068] <==
	I1206 09:52:32.841898       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.844171       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:52:32.844203       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:52:32.845130       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:52:32.845564       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:52:32.855650       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:32.869013       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:52:32.874628       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.874653       1 policy_source.go:248] refreshing policies
	I1206 09:52:32.935310       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.936302       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:52:32.936326       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:52:32.936600       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:52:33.146555       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:52:33.175478       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:52:33.193607       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:52:33.202616       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:52:33.209992       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:52:33.244699       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.50.12"}
	I1206 09:52:33.255873       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.80.194"}
	I1206 09:52:33.751134       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:52:36.453060       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:52:36.453122       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:52:36.552233       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:52:36.702163       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4740c81bbda6eb396add856fa79e529e77045345b6b8aafa409f0c035427e3e5] <==
	I1206 09:52:36.006837       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006879       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006947       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006961       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006625       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006655       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007229       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007250       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007271       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007296       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007348       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:52:36.007406       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:52:36.007414       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:36.007419       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007596       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007974       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.009204       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.009341       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.009751       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.011745       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.020310       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:36.105940       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.105962       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:52:36.105969       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:52:36.120950       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8ed2e44ffcad005a51eaa2da515c456cbde31d1c0f0f8025b9411ffda44f5ff8] <==
	I1206 09:52:34.184746       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:52:34.256785       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:34.357524       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:34.357698       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:52:34.357824       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:52:34.380396       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:52:34.380469       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:52:34.387136       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:52:34.387511       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:52:34.387527       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:52:34.389420       1 config.go:200] "Starting service config controller"
	I1206 09:52:34.390554       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:52:34.389919       1 config.go:309] "Starting node config controller"
	I1206 09:52:34.390687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:52:34.390708       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:52:34.389912       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:52:34.390718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:52:34.389941       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:52:34.390736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:52:34.490840       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:52:34.490851       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:52:34.490879       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9dc873b13be2daef40a2751e9c41eeada071f9d2a36935447fdcf8f69e38bcb0] <==
	E1206 09:52:32.852165       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.852927       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1206 09:52:32.852912       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.854172       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1206 09:52:32.854303       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:32.854396       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.854524       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1206 09:52:32.854679       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1206 09:52:32.854696       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1206 09:52:32.855116       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.855227       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:52:32.855287       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:32.855508       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:52:32.858284       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:32.858579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1206 09:52:32.858735       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1206 09:52:32.858847       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:52:32.858918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1206 09:52:32.858979       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1206 09:52:32.859519       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1206 09:52:32.860204       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.860977       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1206 09:52:32.861107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1206 09:52:32.859610       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1206 09:52:32.949820       1 shared_informer.go:377] "Caches are synced"
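The watchlist fallbacks and "Failed to watch" errors here are a startup race: the scheduler starts its informers before the apiserver has finished syncing RBAC policy, after which the caches sync cleanly (last line). Whether a given rule has settled can be checked explicitly:

    # Impersonate the scheduler and ask whether the watch would now be authorized.
    kubectl auth can-i watch persistentvolumes --as=system:kube-scheduler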
	
	
	==> kubelet <==
	Dec 06 09:52:50 no-preload-521770 kubelet[722]: E1206 09:52:50.318858     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: E1206 09:52:55.741429     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: I1206 09:52:55.741493     722 scope.go:122] "RemoveContainer" containerID="6d8adbbfbceb854ef17233697e783e687b0e2f5a41f9ffcf9fad4e00194e777c"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: I1206 09:52:55.838282     722 scope.go:122] "RemoveContainer" containerID="6d8adbbfbceb854ef17233697e783e687b0e2f5a41f9ffcf9fad4e00194e777c"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: E1206 09:52:55.838538     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: I1206 09:52:55.838573     722 scope.go:122] "RemoveContainer" containerID="f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: E1206 09:52:55.838733     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:53:00 no-preload-521770 kubelet[722]: E1206 09:53:00.318797     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:53:00 no-preload-521770 kubelet[722]: I1206 09:53:00.318833     722 scope.go:122] "RemoveContainer" containerID="f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f"
	Dec 06 09:53:00 no-preload-521770 kubelet[722]: E1206 09:53:00.319019     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:53:04 no-preload-521770 kubelet[722]: I1206 09:53:04.864291     722 scope.go:122] "RemoveContainer" containerID="df87b3fa3a4a208955c1a48e6d46a19a5567b0311b97242991aea76fc0d6487e"
	Dec 06 09:53:13 no-preload-521770 kubelet[722]: E1206 09:53:13.404543     722 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mhwh5" containerName="coredns"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: E1206 09:53:16.740842     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: I1206 09:53:16.740887     722 scope.go:122] "RemoveContainer" containerID="f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: I1206 09:53:16.905163     722 scope.go:122] "RemoveContainer" containerID="f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: E1206 09:53:16.905550     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: I1206 09:53:16.905582     722 scope.go:122] "RemoveContainer" containerID="a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: E1206 09:53:16.905811     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:53:20 no-preload-521770 kubelet[722]: E1206 09:53:20.319285     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:53:20 no-preload-521770 kubelet[722]: I1206 09:53:20.319339     722 scope.go:122] "RemoveContainer" containerID="a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845"
	Dec 06 09:53:20 no-preload-521770 kubelet[722]: E1206 09:53:20.319562     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:53:27 no-preload-521770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:53:27 no-preload-521770 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:53:27 no-preload-521770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:53:27 no-preload-521770 systemd[1]: kubelet.service: Consumed 1.891s CPU time.
	
	
	==> kubernetes-dashboard [e99a0c409aee494e84e9717b2c81fbd6716d787c2a6936c23c10595e6f8dc302] <==
	2025/12/06 09:52:40 Starting overwatch
	2025/12/06 09:52:40 Using namespace: kubernetes-dashboard
	2025/12/06 09:52:40 Using in-cluster config to connect to apiserver
	2025/12/06 09:52:40 Using secret token for csrf signing
	2025/12/06 09:52:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:52:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:52:40 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/06 09:52:40 Generating JWE encryption key
	2025/12/06 09:52:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:52:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:52:41 Initializing JWE encryption key from synchronized object
	2025/12/06 09:52:41 Creating in-cluster Sidecar client
	2025/12/06 09:52:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:52:41 Serving insecurely on HTTP port: 9090
	2025/12/06 09:53:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94] <==
	I1206 09:53:04.922835       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:53:04.930609       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:53:04.930664       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:53:04.933611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:08.389383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:12.650839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:16.249018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:19.303357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:22.326335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:22.332666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:22.332876       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:53:22.333062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-521770_dd6d35eb-aa45-4f94-98b4-1f2affa87112!
	I1206 09:53:22.333327       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61a34a7f-5161-40bf-8cdb-f26ed1163acf", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-521770_dd6d35eb-aa45-4f94-98b4-1f2affa87112 became leader
	W1206 09:53:22.341814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:22.345254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:22.433788       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-521770_dd6d35eb-aa45-4f94-98b4-1f2affa87112!
	W1206 09:53:24.348672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:24.353824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:26.357414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:26.361224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:28.364391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:28.417517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [df87b3fa3a4a208955c1a48e6d46a19a5567b0311b97242991aea76fc0d6487e] <==
	I1206 09:52:34.128960       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:53:04.137907       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
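The kubelet entries above show dashboard-metrics-scraper in a routine crash loop: the restart delay doubles from 10s to 20s to 40s between attempts. A minimal Go sketch of that back-off schedule, assuming kubelet's default 5m cap (loop bound and variable names are illustrative, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Crash-loop restarts back off exponentially: start at 10s, double per
	// attempt, cap at 5m. Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, matching
	// the "back-off 10s/20s/40s" values in the journal entries above.
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("restart %d: CrashLoopBackOff %v\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}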
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521770 -n no-preload-521770
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521770 -n no-preload-521770: exit status 2 (393.068167ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
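Here --format={{.APIServer}} is a Go text/template selector rendered against minikube's status payload, while the degraded-state signal travels separately in the exit status (2). A rough stand-in illustration (the struct below is hypothetical; minikube's real status type and field values differ):

package main

import (
	"os"
	"text/template"
)

// Hypothetical stand-in for minikube's status payload; fields mirror the
// --format queries used in this report ({{.Host}}, {{.APIServer}}).
type status struct{ Host, Kubelet, APIServer string }

func main() {
	s := status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	_ = tmpl.Execute(os.Stdout, s) // prints: Running
}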
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-521770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-521770
helpers_test.go:243: (dbg) docker inspect no-preload-521770:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f",
	        "Created": "2025-12-06T09:51:06.611954102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 782292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:24.51201288Z",
	            "FinishedAt": "2025-12-06T09:52:23.527367682Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/hostname",
	        "HostsPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/hosts",
	        "LogPath": "/var/lib/docker/containers/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f/de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f-json.log",
	        "Name": "/no-preload-521770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-521770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-521770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "de37f97672bc26323a61a2e6f12bae7e57279821f0f4babd41b198d081df704f",
	                "LowerDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63c8e1d0a2b76a84f0279a5b1e1bbe9717fe37fd200a4394c4bc0a3c3e93aefc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-521770",
	                "Source": "/var/lib/docker/volumes/no-preload-521770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-521770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-521770",
	                "name.minikube.sigs.k8s.io": "no-preload-521770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3c990fe611afd33b23de21ebfd2a6301980eadf0d107a519c7785b552aaa36f0",
	            "SandboxKey": "/var/run/docker/netns/3c990fe611af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-521770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "335ab24bf65197b10f86bad2a0ebe3cc633e48da6bfe1bab2aae94fda11c69b4",
	                    "EndpointID": "d72c6091b7796330a4e8e6b1dcf4ad02a11e3fb5068b87de6c78774957b19dfa",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ca:ec:1f:71:ea:5c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-521770",
	                        "de37f97672bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
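In the HostConfig above each PortBindings entry requests HostPort "" (let Docker pick), and the resolved ports appear under NetworkSettings.Ports; the test harness reads them back with the inspect template visible earlier in these logs. A small Go sketch of that lookup, reusing the container name from this report (assumes a local docker CLI):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go-template query minikube's cli_runner issues to find the host
	// port Docker mapped to the node's SSH port (22/tcp).
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"no-preload-521770").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33211
}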
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521770 -n no-preload-521770
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521770 -n no-preload-521770: exit status 2 (321.472955ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-521770 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-521770 logs -n 25: (1.203480849s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-521770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p no-preload-521770 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p stopped-upgrade-031481                                                                                                                                                                                                                            │ stopped-upgrade-031481       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p no-preload-521770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-641599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-997968 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ newest-cni-641599 image list --format=json                                                                                                                                                                                                           │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p newest-cni-641599 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p auto-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-983381                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ no-preload-521770 image list --format=json                                                                                                                                                                                                           │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p no-preload-521770 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:53:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:53:10.314598  796626 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:10.314906  796626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:10.314917  796626 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:10.314923  796626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:10.315255  796626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:10.315874  796626 out.go:368] Setting JSON to false
	I1206 09:53:10.317570  796626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9334,"bootTime":1765005456,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:53:10.317651  796626 start.go:143] virtualization: kvm guest
	I1206 09:53:10.321620  796626 out.go:179] * [auto-983381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:53:10.323587  796626 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:53:10.323698  796626 notify.go:221] Checking for updates...
	I1206 09:53:10.325764  796626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:53:10.329609  796626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:10.330739  796626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:53:10.331787  796626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:53:10.332975  796626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:53:10.334901  796626 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:10.335068  796626 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:10.335201  796626 config.go:182] Loaded profile config "no-preload-521770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:53:10.335332  796626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:53:10.367684  796626 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:53:10.367791  796626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:10.442629  796626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-06 09:53:10.429605617 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:10.442794  796626 docker.go:319] overlay module found
	I1206 09:53:10.444478  796626 out.go:179] * Using the docker driver based on user configuration
	I1206 09:53:10.445531  796626 start.go:309] selected driver: docker
	I1206 09:53:10.445551  796626 start.go:927] validating driver "docker" against <nil>
	I1206 09:53:10.445569  796626 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:53:10.446412  796626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:10.518950  796626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-06 09:53:10.506081396 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:10.519164  796626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:53:10.519507  796626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:10.522573  796626 out.go:179] * Using Docker driver with root privileges
	I1206 09:53:10.523570  796626 cni.go:84] Creating CNI manager for ""
	I1206 09:53:10.523673  796626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:53:10.523689  796626 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:53:10.523789  796626 start.go:353] cluster config:
	{Name:auto-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:10.524986  796626 out.go:179] * Starting "auto-983381" primary control-plane node in "auto-983381" cluster
	I1206 09:53:10.526019  796626 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:53:10.527138  796626 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:53:10.528368  796626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:10.528411  796626 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:53:10.528425  796626 cache.go:65] Caching tarball of preloaded images
	I1206 09:53:10.528485  796626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:53:10.528561  796626 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:53:10.528579  796626 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:53:10.528730  796626 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/config.json ...
	I1206 09:53:10.528760  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/config.json: {Name:mk8ffbb4e65ebd7712373ae725b794a8a70e0dc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:10.553808  796626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:53:10.553830  796626 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:53:10.553852  796626 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:53:10.553893  796626 start.go:360] acquireMachinesLock for auto-983381: {Name:mkab719bcf4a9828bf3d3e79d20d83abeb871df6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:53:10.554010  796626 start.go:364] duration metric: took 93.997µs to acquireMachinesLock for "auto-983381"
	I1206 09:53:10.554039  796626 start.go:93] Provisioning new machine with config: &{Name:auto-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:10.554130  796626 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:53:09.212965  792441 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:09.213030  792441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:53:09.213102  792441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:53:09.213000  792441 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Status}}
	I1206 09:53:09.250842  792441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33226 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:53:09.256608  792441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33226 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:53:09.259598  792441 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:09.259620  792441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:53:09.259672  792441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:53:09.299891  792441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33226 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:53:09.384035  792441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:09.409667  792441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:09.409910  792441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-997968" to be "Ready" ...
	I1206 09:53:09.424918  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:53:09.424951  792441 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:53:09.447122  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:53:09.447152  792441 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:53:09.453930  792441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:09.466241  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:53:09.466262  792441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:53:09.486081  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:53:09.486106  792441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:53:09.513420  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:53:09.513450  792441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:53:09.533054  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:53:09.533076  792441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:53:09.549987  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:53:09.550074  792441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:53:09.565345  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:53:09.565369  792441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:53:09.581485  792441 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:53:09.581512  792441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:53:09.602415  792441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:53:10.869484  792441 node_ready.go:49] node "embed-certs-997968" is "Ready"
	I1206 09:53:10.869523  792441 node_ready.go:38] duration metric: took 1.459586599s for node "embed-certs-997968" to be "Ready" ...
	I1206 09:53:10.869543  792441 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:53:10.869603  792441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:53:11.473511  792441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.063755283s)
	I1206 09:53:11.473562  792441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019588987s)
	I1206 09:53:11.473702  792441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.871247295s)
	I1206 09:53:11.473748  792441 api_server.go:72] duration metric: took 2.297485271s to wait for apiserver process to appear ...
	I1206 09:53:11.473767  792441 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:53:11.473973  792441 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:53:11.476724  792441 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-997968 addons enable metrics-server
	
	I1206 09:53:11.479668  792441 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:53:11.479693  792441 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[healthz body omitted: identical to the 33-line check list in the 500 response above]
	I1206 09:53:11.490923  792441 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1206 09:53:09.996622  782026 pod_ready.go:104] pod "coredns-7d764666f9-mhwh5" is not "Ready", error: <nil>
	W1206 09:53:12.496132  782026 pod_ready.go:104] pod "coredns-7d764666f9-mhwh5" is not "Ready", error: <nil>
	I1206 09:53:13.495106  782026 pod_ready.go:94] pod "coredns-7d764666f9-mhwh5" is "Ready"
	I1206 09:53:13.495137  782026 pod_ready.go:86] duration metric: took 39.005759854s for pod "coredns-7d764666f9-mhwh5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.498108  782026 pod_ready.go:83] waiting for pod "etcd-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.502452  782026 pod_ready.go:94] pod "etcd-no-preload-521770" is "Ready"
	I1206 09:53:13.502503  782026 pod_ready.go:86] duration metric: took 4.370843ms for pod "etcd-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.504713  782026 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.509041  782026 pod_ready.go:94] pod "kube-apiserver-no-preload-521770" is "Ready"
	I1206 09:53:13.509064  782026 pod_ready.go:86] duration metric: took 4.32904ms for pod "kube-apiserver-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.510960  782026 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.693730  782026 pod_ready.go:94] pod "kube-controller-manager-no-preload-521770" is "Ready"
	I1206 09:53:13.693763  782026 pod_ready.go:86] duration metric: took 182.77926ms for pod "kube-controller-manager-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:13.894574  782026 pod_ready.go:83] waiting for pod "kube-proxy-t7vrx" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:53:11.699708  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:14.198625  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:14.486992  782026 pod_ready.go:94] pod "kube-proxy-t7vrx" is "Ready"
	I1206 09:53:14.487023  782026 pod_ready.go:86] duration metric: took 592.413178ms for pod "kube-proxy-t7vrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:14.493871  782026 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:14.892551  782026 pod_ready.go:94] pod "kube-scheduler-no-preload-521770" is "Ready"
	I1206 09:53:14.892586  782026 pod_ready.go:86] duration metric: took 398.684783ms for pod "kube-scheduler-no-preload-521770" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:14.892602  782026 pod_ready.go:40] duration metric: took 40.475424909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:14.949257  782026 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:53:15.058922  782026 out.go:179] * Done! kubectl is now configured to use "no-preload-521770" cluster and "default" namespace by default
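
For reference, the pod_ready lines above implement a per-label wait: list the kube-system pods matching each selector and require the PodReady condition to be True before moving on. A minimal client-go sketch of that loop, assuming a reachable kubeconfig at a placeholder path and the same selectors named in the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allReady reports whether every kube-system pod matching the selector
    // currently has the Ready condition set to True.
    func allReady(cs *kubernetes.Clientset, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        // Placeholder kubeconfig path; substitute whatever the caller uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same label selectors the log lines above wait on.
        selectors := []string{"k8s-app=kube-dns", "component=etcd",
            "component=kube-apiserver", "component=kube-controller-manager",
            "k8s-app=kube-proxy", "component=kube-scheduler"}
        for _, sel := range selectors {
            for {
                ok, err := allReady(cs, sel)
                if err == nil && ok {
                    fmt.Println("ready:", sel)
                    break
                }
                time.Sleep(2 * time.Second)
            }
        }
    }
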
	I1206 09:53:10.559201  796626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:53:10.559524  796626 start.go:159] libmachine.API.Create for "auto-983381" (driver="docker")
	I1206 09:53:10.559576  796626 client.go:173] LocalClient.Create starting
	I1206 09:53:10.559685  796626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:53:10.559739  796626 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:10.559766  796626 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:10.559841  796626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:53:10.559870  796626 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:10.559900  796626 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:10.560369  796626 cli_runner.go:164] Run: docker network inspect auto-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:53:10.584692  796626 cli_runner.go:211] docker network inspect auto-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:53:10.584811  796626 network_create.go:284] running [docker network inspect auto-983381] to gather additional debugging logs...
	I1206 09:53:10.584842  796626 cli_runner.go:164] Run: docker network inspect auto-983381
	W1206 09:53:10.608435  796626 cli_runner.go:211] docker network inspect auto-983381 returned with exit code 1
	I1206 09:53:10.608493  796626 network_create.go:287] error running [docker network inspect auto-983381]: docker network inspect auto-983381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-983381 not found
	I1206 09:53:10.608521  796626 network_create.go:289] output of [docker network inspect auto-983381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-983381 not found
	
	** /stderr **
	I1206 09:53:10.608634  796626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:10.632296  796626 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:53:10.633370  796626 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:53:10.634241  796626 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:53:10.635187  796626 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d50290}
	I1206 09:53:10.635229  796626 network_create.go:124] attempt to create docker network auto-983381 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1206 09:53:10.635300  796626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-983381 auto-983381
	I1206 09:53:10.716818  796626 network_create.go:108] docker network auto-983381 192.168.76.0/24 created
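
The network_create/network.go lines above show how a free subnet is picked: candidate 192.168.x.0/24 blocks are scanned, skipping any already bound to a host bridge, with the third octet advancing in steps of 9 (49, 58, 67, 76). A toy Go sketch of that scan; isTaken is a hypothetical stand-in for the real interface/docker-network check:

    package main

    import "fmt"

    // isTaken is a placeholder for the real check, which inspects host
    // interfaces and existing docker networks as the log lines above do.
    func isTaken(subnet string) bool {
        used := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        return used[subnet]
    }

    func main() {
        // Third octet advances 49, 58, 67, 76, ... matching the log.
        for octet := 49; octet < 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if isTaken(subnet) {
                fmt.Println("skipping subnet", subnet, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", subnet)
            break
        }
    }
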
	I1206 09:53:10.716874  796626 kic.go:121] calculated static IP "192.168.76.2" for the "auto-983381" container
	I1206 09:53:10.716966  796626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:53:10.755068  796626 cli_runner.go:164] Run: docker volume create auto-983381 --label name.minikube.sigs.k8s.io=auto-983381 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:53:10.783172  796626 oci.go:103] Successfully created a docker volume auto-983381
	I1206 09:53:10.783286  796626 cli_runner.go:164] Run: docker run --rm --name auto-983381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-983381 --entrypoint /usr/bin/test -v auto-983381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:53:11.542483  796626 oci.go:107] Successfully prepared a docker volume auto-983381
	I1206 09:53:11.542589  796626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:11.542608  796626 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:53:11.542785  796626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
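
The extraction step runs tar inside a throwaway container, mounting the preload tarball read-only and the freshly created volume at /extractDir, so the images land in the volume without touching the host filesystem. A hedged os/exec sketch of the same invocation, with the tarball path and kicbase image copied from the log line above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Values copied from the docker run line above.
        tarball := "/home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164"
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "auto-983381:/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }
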
	I1206 09:53:11.494191  792441 addons.go:530] duration metric: took 2.317774385s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1206 09:53:11.974112  792441 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:53:11.979033  792441 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:53:11.979069  792441 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[healthz body omitted: identical to the 33-line check list in the 500 response above]
	I1206 09:53:12.474620  792441 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:53:12.480642  792441 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 09:53:12.481903  792441 api_server.go:141] control plane version: v1.34.2
	I1206 09:53:12.481937  792441 api_server.go:131] duration metric: took 1.008162753s to wait for apiserver health ...
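
The healthz exchange above is the normal startup pattern: /healthz answers 500 while poststarthooks such as rbac/bootstrap-roles are still running, then flips to 200 once bootstrap finishes, at which point the wait ends. A minimal Go sketch of that polling loop; the InsecureSkipVerify transport is an assumption for a throwaway local client, not how minikube itself authenticates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 OK or the deadline expires. A 500 body lists each poststarthook as
    // [+] ok or [-] failed, which is what the log above captures.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Local test only: the apiserver cert is self-signed.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
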
	I1206 09:53:12.481949  792441 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:53:12.485497  792441 system_pods.go:59] 8 kube-system pods found
	I1206 09:53:12.485552  792441 system_pods.go:61] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:53:12.485567  792441 system_pods.go:61] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:53:12.485576  792441 system_pods.go:61] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:53:12.485595  792441 system_pods.go:61] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:53:12.485604  792441 system_pods.go:61] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:53:12.485611  792441 system_pods.go:61] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:53:12.485619  792441 system_pods.go:61] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:53:12.485624  792441 system_pods.go:61] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Running
	I1206 09:53:12.485632  792441 system_pods.go:74] duration metric: took 3.67554ms to wait for pod list to return data ...
	I1206 09:53:12.485642  792441 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:53:12.488382  792441 default_sa.go:45] found service account: "default"
	I1206 09:53:12.488409  792441 default_sa.go:55] duration metric: took 2.759903ms for default service account to be created ...
	I1206 09:53:12.488419  792441 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:53:12.491886  792441 system_pods.go:86] 8 kube-system pods found
	I1206 09:53:12.491921  792441 system_pods.go:89] "coredns-66bc5c9577-kw8nl" [a588cb47-54de-454f-801b-111a581192ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:53:12.491932  792441 system_pods.go:89] "etcd-embed-certs-997968" [af903a34-7446-4768-93e6-c70e8ce91b7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:53:12.491942  792441 system_pods.go:89] "kindnet-f84xr" [323e6efb-c1dc-4444-a267-62cbeea83a87] Running
	I1206 09:53:12.491950  792441 system_pods.go:89] "kube-apiserver-embed-certs-997968" [f20a3720-527a-49de-8faf-55fbdb709ed2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:53:12.491958  792441 system_pods.go:89] "kube-controller-manager-embed-certs-997968" [7fd2c911-3332-45e0-b09a-45c657e729a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:53:12.491963  792441 system_pods.go:89] "kube-proxy-m2zpr" [69d79892-828c-4f7a-b513-947e20961afe] Running
	I1206 09:53:12.491971  792441 system_pods.go:89] "kube-scheduler-embed-certs-997968" [6cb46b79-b29c-43cf-9be7-7eedc3d0fe43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:53:12.491983  792441 system_pods.go:89] "storage-provisioner" [9f02a7ce-95cb-4187-936a-e77551b1afb8] Running
	I1206 09:53:12.491993  792441 system_pods.go:126] duration metric: took 3.566416ms to wait for k8s-apps to be running ...
	I1206 09:53:12.492005  792441 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:53:12.492061  792441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:12.509669  792441 system_svc.go:56] duration metric: took 17.650745ms WaitForService to wait for kubelet
	I1206 09:53:12.509700  792441 kubeadm.go:587] duration metric: took 3.333438676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:12.509723  792441 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:53:12.513432  792441 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:53:12.513490  792441 node_conditions.go:123] node cpu capacity is 8
	I1206 09:53:12.513508  792441 node_conditions.go:105] duration metric: took 3.778313ms to run NodePressure ...
	I1206 09:53:12.513526  792441 start.go:242] waiting for startup goroutines ...
	I1206 09:53:12.513535  792441 start.go:247] waiting for cluster config update ...
	I1206 09:53:12.513550  792441 start.go:256] writing updated cluster config ...
	I1206 09:53:12.513876  792441 ssh_runner.go:195] Run: rm -f paused
	I1206 09:53:12.519117  792441 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:12.523919  792441 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:53:14.661295  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:16.698193  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:19.197363  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:16.779749  796626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (5.236886664s)
	I1206 09:53:16.779790  796626 kic.go:203] duration metric: took 5.237178224s to extract preloaded images to volume ...
	W1206 09:53:16.779905  796626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:53:16.779953  796626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:53:16.780021  796626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:53:16.865941  796626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-983381 --name auto-983381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-983381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-983381 --network auto-983381 --ip 192.168.76.2 --volume auto-983381:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:53:17.268388  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Running}}
	I1206 09:53:17.292606  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:17.319426  796626 cli_runner.go:164] Run: docker exec auto-983381 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:53:17.380406  796626 oci.go:144] the created container "auto-983381" has a running status.
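
After docker run returns, readiness of the new container is confirmed by polling docker container inspect, as the repeated inspect calls above show. A small sketch of that poll, assuming the docker CLI is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // running shells out to docker, exactly as the cli_runner lines above do,
    // and reports whether the container's State.Running flag is true.
    func running(name string) bool {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Running}}").Output()
        return err == nil && strings.TrimSpace(string(out)) == "true"
    }

    func main() {
        for i := 0; i < 30; i++ {
            if running("auto-983381") {
                fmt.Println("container is running")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for container")
    }
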
	I1206 09:53:17.380447  796626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa...
	I1206 09:53:17.599095  796626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:53:17.637079  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:17.668664  796626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:53:17.668687  796626 kic_runner.go:114] Args: [docker exec --privileged auto-983381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:53:17.735989  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:17.754909  796626 machine.go:94] provisionDockerMachine start ...
	I1206 09:53:17.755007  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:17.776534  796626 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:17.776925  796626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33231 <nil> <nil>}
	I1206 09:53:17.776960  796626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:53:17.932625  796626 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-983381
	
	I1206 09:53:17.932658  796626 ubuntu.go:182] provisioning hostname "auto-983381"
	I1206 09:53:17.932733  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:17.956406  796626 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:17.956810  796626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33231 <nil> <nil>}
	I1206 09:53:17.956833  796626 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-983381 && echo "auto-983381" | sudo tee /etc/hostname
	I1206 09:53:18.117199  796626 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-983381
	
	I1206 09:53:18.117296  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:18.141288  796626 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:18.141665  796626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33231 <nil> <nil>}
	I1206 09:53:18.141692  796626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-983381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-983381/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-983381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:53:18.287790  796626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:53:18.287824  796626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:53:18.287859  796626 ubuntu.go:190] setting up certificates
	I1206 09:53:18.287884  796626 provision.go:84] configureAuth start
	I1206 09:53:18.287960  796626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-983381
	I1206 09:53:18.307507  796626 provision.go:143] copyHostCerts
	I1206 09:53:18.307583  796626 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:53:18.307598  796626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:53:18.307835  796626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:53:18.307969  796626 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:53:18.307985  796626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:53:18.308028  796626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:53:18.308109  796626 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:53:18.308120  796626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:53:18.308170  796626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:53:18.308240  796626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.auto-983381 san=[127.0.0.1 192.168.76.2 auto-983381 localhost minikube]
	I1206 09:53:18.580115  796626 provision.go:177] copyRemoteCerts
	I1206 09:53:18.580184  796626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:53:18.580241  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:18.605051  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:18.710847  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:53:18.736421  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1206 09:53:18.760005  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:53:18.784425  796626 provision.go:87] duration metric: took 496.52003ms to configureAuth
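
configureAuth above generates a CA-signed server certificate whose SANs cover the loopback address, the container IP, the hostname, localhost and minikube. A self-contained crypto/x509 sketch of that step, using a throwaway CA and 24h validity as assumptions in place of the real .minikube CA material:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs listed in the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.auto-983381"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:     []string{"auto-983381", "localhost", "minikube"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
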
	I1206 09:53:18.784470  796626 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:53:18.784690  796626 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:18.784827  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:18.807839  796626 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:18.808149  796626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33231 <nil> <nil>}
	I1206 09:53:18.808177  796626 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:53:19.100886  796626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:53:19.100934  796626 machine.go:97] duration metric: took 1.346001172s to provisionDockerMachine
	I1206 09:53:19.100952  796626 client.go:176] duration metric: took 8.541367609s to LocalClient.Create
	I1206 09:53:19.100980  796626 start.go:167] duration metric: took 8.541459534s to libmachine.API.Create "auto-983381"
	I1206 09:53:19.100997  796626 start.go:293] postStartSetup for "auto-983381" (driver="docker")
	I1206 09:53:19.101014  796626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:53:19.101097  796626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:53:19.101153  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:19.121148  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:19.220339  796626 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:53:19.224805  796626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:53:19.224843  796626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:53:19.224857  796626 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:53:19.224921  796626 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:53:19.225039  796626 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:53:19.225178  796626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:53:19.233348  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:19.253961  796626 start.go:296] duration metric: took 152.924749ms for postStartSetup
	I1206 09:53:19.254276  796626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-983381
	I1206 09:53:19.271744  796626 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/config.json ...
	I1206 09:53:19.272001  796626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:53:19.272051  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:19.289588  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:19.388709  796626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:53:19.393578  796626 start.go:128] duration metric: took 8.839434366s to createHost
	I1206 09:53:19.393601  796626 start.go:83] releasing machines lock for "auto-983381", held for 8.839578656s
	I1206 09:53:19.393687  796626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-983381
	I1206 09:53:19.411069  796626 ssh_runner.go:195] Run: cat /version.json
	I1206 09:53:19.411131  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:19.411137  796626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:53:19.411213  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:19.430385  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:19.430781  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:19.574328  796626 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:19.581326  796626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:53:19.617087  796626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:53:19.622054  796626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:53:19.622144  796626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:53:19.648072  796626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:53:19.648095  796626 start.go:496] detecting cgroup driver to use...
	I1206 09:53:19.648131  796626 detect.go:190] detected "systemd" cgroup driver on host os
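
detect.go settles on the "systemd" cgroup driver for this host. One plausible heuristic for that decision, sketched below, is checking for a cgroup v2 unified hierarchy; this is an illustrative guess, not necessarily minikube's exact logic:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // A cgroup v2 unified hierarchy exposes cgroup.controllers at the
        // root; on such hosts the systemd cgroup driver is the usual choice.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("detected \"systemd\" cgroup driver (cgroup v2)")
            return
        }
        fmt.Println("assuming \"cgroupfs\" cgroup driver (cgroup v1)")
    }
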
	I1206 09:53:19.648185  796626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:53:19.664452  796626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:53:19.676559  796626 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:53:19.676609  796626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:53:19.692657  796626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:53:19.712532  796626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:53:19.795865  796626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:53:19.885579  796626 docker.go:234] disabling docker service ...
	I1206 09:53:19.885656  796626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:53:19.906614  796626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:53:19.921011  796626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:53:20.009418  796626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:53:20.090829  796626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:53:20.103872  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:53:20.119288  796626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:53:20.119354  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.130546  796626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:53:20.130611  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.140002  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.149198  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.158672  796626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:53:20.167366  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.176677  796626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.190832  796626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:20.200879  796626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:53:20.209137  796626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:53:20.216989  796626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:20.309704  796626 ssh_runner.go:195] Run: sudo systemctl restart crio
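
The crio.go edits above are plain in-place rewrites of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, set cgroup_manager to "systemd", re-insert conmon_cgroup = "pod", and open unprivileged ports via default_sysctls, followed by a daemon-reload and crio restart. The first three edits, sketched with Go regexps over an in-memory sample instead of sed against the live file:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Sample stand-in for /etc/crio/crio.conf.d/02-crio.conf.
        conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "system.slice"`

        sub := func(pattern, repl, s string) string {
            return regexp.MustCompile(pattern).ReplaceAllString(s, repl)
        }
        // Mirrors: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        conf = sub(`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`, conf)
        // Mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        conf = sub(`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`, conf)
        // Mirrors: delete conmon_cgroup, then re-insert it after cgroup_manager.
        conf = sub(`(?m)^\s*conmon_cgroup = .*\n?`, "", conf)
        conf = sub(`(?m)^(cgroup_manager = .*)$`, "$1\nconmon_cgroup = \"pod\"", conf)
        fmt.Println(conf)
    }
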
	I1206 09:53:20.832598  796626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:53:20.832674  796626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:53:20.838156  796626 start.go:564] Will wait 60s for crictl version
	I1206 09:53:20.838218  796626 ssh_runner.go:195] Run: which crictl
	I1206 09:53:20.843152  796626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:53:20.878090  796626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:53:20.878179  796626 ssh_runner.go:195] Run: crio --version
	I1206 09:53:20.918917  796626 ssh_runner.go:195] Run: crio --version
	I1206 09:53:20.960624  796626 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1206 09:53:17.029902  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:19.030595  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:21.031656  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:20.962027  796626 cli_runner.go:164] Run: docker network inspect auto-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:20.984534  796626 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 09:53:20.989870  796626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:21.003644  796626 kubeadm.go:884] updating cluster {Name:auto-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:53:21.003807  796626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:21.003897  796626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:21.050155  796626 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:21.050182  796626 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:53:21.050244  796626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:21.084927  796626 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:21.084957  796626 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:53:21.084966  796626 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1206 09:53:21.085103  796626 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-983381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:53:21.085196  796626 ssh_runner.go:195] Run: crio config
	I1206 09:53:21.147879  796626 cni.go:84] Creating CNI manager for ""
	I1206 09:53:21.147920  796626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:53:21.147945  796626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:53:21.147977  796626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-983381 NodeName:auto-983381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:53:21.148167  796626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-983381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:53:21.148255  796626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:53:21.159057  796626 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:53:21.159127  796626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:53:21.169519  796626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1206 09:53:21.188281  796626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:53:21.208263  796626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
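
One detail worth noting in the generated kubeadm.yaml: the kubeadm.k8s.io/v1beta4 API renders extraArgs as an ordered list of name/value pairs (as seen above), rather than the flat map earlier API versions used. A minimal Go mirror of that shape, using gopkg.in/yaml.v3 and simplified stand-in structs rather than the real kubeadm types:

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // Arg mirrors the v1beta4 name/value pair form used for extraArgs.
    type Arg struct {
        Name  string `yaml:"name"`
        Value string `yaml:"value"`
    }

    // ControllerManager is a simplified stand-in for the kubeadm component type.
    type ControllerManager struct {
        ExtraArgs []Arg `yaml:"extraArgs"`
    }

    func main() {
        cm := ControllerManager{ExtraArgs: []Arg{
            {Name: "allocate-node-cidrs", Value: "true"},
            {Name: "leader-elect", Value: "false"},
        }}
        out, _ := yaml.Marshal(cm)
        fmt.Print(string(out))
    }
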
	I1206 09:53:21.225796  796626 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:53:21.230699  796626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:21.244127  796626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:21.355322  796626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:21.387784  796626 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381 for IP: 192.168.76.2
	I1206 09:53:21.387808  796626 certs.go:195] generating shared ca certs ...
	I1206 09:53:21.387830  796626 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.388011  796626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:53:21.388094  796626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:53:21.388117  796626 certs.go:257] generating profile certs ...
	I1206 09:53:21.388199  796626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.key
	I1206 09:53:21.388220  796626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.crt with IP's: []
	I1206 09:53:21.527779  796626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.crt ...
	I1206 09:53:21.527813  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.crt: {Name:mk2ad50101add546ad51db7e44569749ea1a5f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.528042  796626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.key ...
	I1206 09:53:21.528065  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/client.key: {Name:mk6b957f3fb4700b5610e36b0ef33f28d90a8bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.528199  796626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key.2ee95c02
	I1206 09:53:21.528221  796626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt.2ee95c02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1206 09:53:21.559539  796626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt.2ee95c02 ...
	I1206 09:53:21.559571  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt.2ee95c02: {Name:mkfde50eb4629d079ca9f921db7a290f4181d6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.559774  796626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key.2ee95c02 ...
	I1206 09:53:21.559795  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key.2ee95c02: {Name:mkfb936c148fef11e349055de9ebaf61d63b8a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.559951  796626 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt.2ee95c02 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt
	I1206 09:53:21.560085  796626 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key.2ee95c02 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key
	I1206 09:53:21.560199  796626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.key
	I1206 09:53:21.560231  796626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.crt with IP's: []
	I1206 09:53:21.715899  796626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.crt ...
	I1206 09:53:21.715932  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.crt: {Name:mkdc8ba936c68350260908df68becc7b02551b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.716144  796626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.key ...
	I1206 09:53:21.716169  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.key: {Name:mk536bcfaff43bc1e0f5d3922abe12261c37b246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:21.716380  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:53:21.716421  796626 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:53:21.716432  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:53:21.716468  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:53:21.716493  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:53:21.716520  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:53:21.716592  796626 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:21.717258  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:53:21.741669  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:53:21.764916  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:53:21.787222  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:53:21.810264  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1206 09:53:21.830290  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:53:21.851361  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:53:21.874437  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/auto-983381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:53:21.897806  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:53:21.925127  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:53:21.949014  796626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:53:21.974019  796626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
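
The block above pushes the freshly generated CA and profile certificates onto the node. A manual equivalent would be one scp per file; minikube actually streams the bytes through its own ssh_runner rather than shelling out, and the host, port, and key below are placeholders, not values from this run:

    # Illustrative sketch only; fill in the profile's SSH port and key.
    scp -i ~/.minikube/machines/<profile>/id_rsa -P <ssh-port> \
        ~/.minikube/ca.crt docker@127.0.0.1:/var/lib/minikube/certs/ca.crt
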
	I1206 09:53:21.991183  796626 ssh_runner.go:195] Run: openssl version
	I1206 09:53:21.999540  796626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:53:22.010959  796626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:53:22.021880  796626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:53:22.027590  796626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:53:22.027666  796626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:53:22.089285  796626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:53:22.100350  796626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
	I1206 09:53:22.111529  796626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:53:22.122127  796626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:53:22.132733  796626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:53:22.137587  796626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:53:22.137649  796626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:53:22.195795  796626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:22.207559  796626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:22.218889  796626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:22.230203  796626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:53:22.241028  796626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:22.246203  796626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:22.246293  796626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:22.304669  796626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:53:22.315409  796626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
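
The three rounds above follow OpenSSL's hashed-directory convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and a <hash>.0 symlink under /etc/ssl/certs lets TLS clients find the CA by that hash. A minimal sketch of the same pattern (the certificate path is illustrative):

    # Compute the subject hash, then create the lookup symlink OpenSSL expects.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"
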
	I1206 09:53:22.327336  796626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:53:22.332786  796626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:53:22.332859  796626 kubeadm.go:401] StartCluster: {Name:auto-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:22.332963  796626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:53:22.333021  796626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:53:22.373356  796626 cri.go:89] found id: ""
	I1206 09:53:22.373434  796626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:53:22.384705  796626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:53:22.396467  796626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:53:22.396542  796626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:53:22.408067  796626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:53:22.408090  796626 kubeadm.go:158] found existing configuration files:
	
	I1206 09:53:22.408140  796626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:53:22.419357  796626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:53:22.419426  796626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:53:22.430093  796626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:53:22.440311  796626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:53:22.440380  796626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:53:22.450909  796626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:53:22.461671  796626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:53:22.461736  796626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:53:22.472006  796626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:53:22.482874  796626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:53:22.482953  796626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
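
The cleanup above greps each kubeconfig for the expected control-plane endpoint and deletes the file when the check fails (here all four files are simply absent, so each grep exits with status 2 and the rm is a no-op). The same check-then-remove pattern in shell, using the paths and endpoint quoted in the log:

    # Drop any kubeconfig that does not already point at the expected endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done
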
	I1206 09:53:22.494029  796626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:53:22.548000  796626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:53:22.548072  796626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:53:22.576658  796626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:53:22.576787  796626 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:53:22.576875  796626 kubeadm.go:319] OS: Linux
	I1206 09:53:22.576957  796626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:53:22.577029  796626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:53:22.577099  796626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:53:22.577169  796626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:53:22.577232  796626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:53:22.577310  796626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:53:22.577378  796626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:53:22.577437  796626 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:53:22.657494  796626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:53:22.657632  796626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:53:22.657791  796626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:53:22.666745  796626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1206 09:53:21.198540  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:23.698589  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:22.669604  796626 out.go:252]   - Generating certificates and keys ...
	I1206 09:53:22.669731  796626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:53:22.669850  796626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:53:23.040095  796626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:53:23.215338  796626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:53:23.459338  796626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:53:23.932487  796626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:53:24.081617  796626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:53:24.081768  796626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-983381 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:53:24.293675  796626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:53:24.293805  796626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-983381 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:53:24.578599  796626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:53:24.665045  796626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:53:24.756902  796626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:53:24.757051  796626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1206 09:53:23.530910  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:26.030073  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:25.443500  796626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:53:25.604502  796626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:53:25.767791  796626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:53:25.882784  796626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:53:26.601886  796626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:53:26.602425  796626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:53:26.605938  796626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1206 09:53:26.198679  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:28.699789  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:26.607286  796626 out.go:252]   - Booting up control plane ...
	I1206 09:53:26.607373  796626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:53:26.607434  796626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:53:26.608192  796626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:53:26.622664  796626 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:53:26.622799  796626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:53:26.629266  796626 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:53:26.629571  796626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:53:26.629646  796626 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:53:26.745573  796626 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:53:26.745765  796626 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:53:27.746318  796626 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000937883s
	I1206 09:53:27.752177  796626 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:53:27.752310  796626 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:53:27.752443  796626 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:53:27.752611  796626 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:53:29.300961  796626 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.548774959s
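
The bootstrap above is driven by the single kubeadm invocation logged at the start of this block. Condensed to its essentials (same binary path and config file, with only a representative subset of the ignored preflight checks):

    # Pin the kubeadm binary directory on PATH and skip preflight checks that
    # are known not to hold inside a docker-driver node.
    sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification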
	
	
	==> CRI-O <==
	Dec 06 09:52:55 no-preload-521770 crio[569]: time="2025-12-06T09:52:55.794271333Z" level=info msg="Started container" PID=1766 containerID=f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper id=6ce0e8f7-c6cc-4f2c-ba74-f58dd964c5fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=02e6cc91bd6818ff962808df3d688db5f5f5a46b6cc5ae0d9e90ff2ee592b007
	Dec 06 09:52:55 no-preload-521770 crio[569]: time="2025-12-06T09:52:55.839550997Z" level=info msg="Removing container: 6d8adbbfbceb854ef17233697e783e687b0e2f5a41f9ffcf9fad4e00194e777c" id=92ad8357-ef05-46cf-88dd-cbdbe4a6bb49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:52:55 no-preload-521770 crio[569]: time="2025-12-06T09:52:55.849450172Z" level=info msg="Removed container 6d8adbbfbceb854ef17233697e783e687b0e2f5a41f9ffcf9fad4e00194e777c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper" id=92ad8357-ef05-46cf-88dd-cbdbe4a6bb49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.86480105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=31107f80-f18a-4f05-b7ac-a0cad986d42b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.865859929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3d23702f-b45a-4fe7-a6f4-fb000888595c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.866937033Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0d7151bd-84b4-4454-92b5-c7f3b4bfa938 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.86708298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.871445121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.871676606Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b6a642ad33efe87b54c04dbc7f33290f95d008640c794a5adf9132192c1e458c/merged/etc/passwd: no such file or directory"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.871710845Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b6a642ad33efe87b54c04dbc7f33290f95d008640c794a5adf9132192c1e458c/merged/etc/group: no such file or directory"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.871979356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.907082579Z" level=info msg="Created container 2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94: kube-system/storage-provisioner/storage-provisioner" id=0d7151bd-84b4-4454-92b5-c7f3b4bfa938 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.907804733Z" level=info msg="Starting container: 2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94" id=745f1026-5c5e-486c-9296-2aa73cf088d5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:04 no-preload-521770 crio[569]: time="2025-12-06T09:53:04.9097309Z" level=info msg="Started container" PID=1780 containerID=2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94 description=kube-system/storage-provisioner/storage-provisioner id=745f1026-5c5e-486c-9296-2aa73cf088d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1788d42dc35110d0c49217fe283fa722a77c6c3c4b692d6e137ae061ee6770e
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.741429486Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=13fb6038-40c8-47a1-9013-589743508d13 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.745694219Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4d39913e-42c8-4928-80c8-e0f8819e5371 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.746932122Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper" id=351709c0-514d-43be-a4ab-7faabfc43fa9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.747091831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.763632436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.764314033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.809841785Z" level=info msg="Created container a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper" id=351709c0-514d-43be-a4ab-7faabfc43fa9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.811743228Z" level=info msg="Starting container: a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845" id=64ca8d0f-02c3-4a6d-b9db-14ffdf0bbe4f name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.814945355Z" level=info msg="Started container" PID=1817 containerID=a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper id=64ca8d0f-02c3-4a6d-b9db-14ffdf0bbe4f name=/runtime.v1.RuntimeService/StartContainer sandboxID=02e6cc91bd6818ff962808df3d688db5f5f5a46b6cc5ae0d9e90ff2ee592b007
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.908268241Z" level=info msg="Removing container: f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f" id=10faafc1-fca5-43c9-a320-27187c01ddae name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:16 no-preload-521770 crio[569]: time="2025-12-06T09:53:16.954352192Z" level=info msg="Removed container f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf/dashboard-metrics-scraper" id=10faafc1-fca5-43c9-a320-27187c01ddae name=/runtime.v1.RuntimeService/RemoveContainer
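
The CRI-O entries above show the dashboard-metrics-scraper container being created, started, exiting, and its previous instance being removed: a restart loop, consistent with the Exited state and attempt count 3 in the container status table below. Such a loop can be inspected directly on the node with crictl:

    # List the container (including exited instances) and fetch its logs.
    sudo crictl ps -a --name dashboard-metrics-scraper
    sudo crictl logs <container-id>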
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a30f79b67fa58       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   3                   02e6cc91bd681       dashboard-metrics-scraper-867fb5f87b-lhdkf   kubernetes-dashboard
	2d35003201fe1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   f1788d42dc351       storage-provisioner                          kube-system
	e99a0c409aee4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   92d59e46022fc       kubernetes-dashboard-b84665fb8-sd5kj         kubernetes-dashboard
	e95c1b791a64e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           57 seconds ago       Running             coredns                     0                   dbea96a64c69b       coredns-7d764666f9-mhwh5                     kube-system
	34879bfa0a35c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   b4e470b64e1de       busybox                                      default
	8ed2e44ffcad0       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           57 seconds ago       Running             kube-proxy                  0                   1e9c368bf4708       kube-proxy-t7vrx                             kube-system
	df87b3fa3a4a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   f1788d42dc351       storage-provisioner                          kube-system
	7e7db9271d279       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   8e13eb1a5ae20       kindnet-2w8b5                                kube-system
	9dc873b13be2d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           About a minute ago   Running             kube-scheduler              0                   54f030d1eb746       kube-scheduler-no-preload-521770             kube-system
	4740c81bbda6e       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           About a minute ago   Running             kube-controller-manager     0                   535d045965bb3       kube-controller-manager-no-preload-521770    kube-system
	1180b54a98400       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   913c92d938e9a       etcd-no-preload-521770                       kube-system
	585f10915444a       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           About a minute ago   Running             kube-apiserver              0                   80b34865a5557       kube-apiserver-no-preload-521770             kube-system
	
	
	==> coredns [e95c1b791a64eddfbcbb348c3a235e0708db18cf4a7f64bb9a7fff385ba3c65f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38202 - 27723 "HINFO IN 7691446011699020337.1410784681661014434. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025257086s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
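
The CoreDNS log above shows the kubernetes plugin coming up before the API server is reachable: the server starts "unsynced", the ready plugin reports not-ready, and the initial watches fail until the control plane settles. Readiness can be probed the same way kubelet does; port 8181 with path /ready is CoreDNS's default ready endpoint, and the pod IP here is a placeholder:

    # Probe the CoreDNS readiness endpoint from the node.
    curl -s http://<coredns-pod-ip>:8181/ready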
	
	
	==> describe nodes <==
	Name:               no-preload-521770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-521770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=no-preload-521770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_51_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:51:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-521770
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:53:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:53:03 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:53:03 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:53:03 +0000   Sat, 06 Dec 2025 09:51:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:53:03 +0000   Sat, 06 Dec 2025 09:51:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-521770
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                77a79082-49d3-48ca-89e8-de80a1e12164
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-7d764666f9-mhwh5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-521770                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-2w8b5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-521770              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-521770     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-t7vrx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-521770              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-lhdkf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-sd5kj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  115s  node-controller  Node no-preload-521770 event: Registered Node no-preload-521770 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-521770 event: Registered Node no-preload-521770 in Controller
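
This section is standard kubectl describe output; to reproduce it against the same cluster, something like the following would be run (kubeconfig selection is illustrative):

    # Dump the node description collected above.
    kubectl describe node no-preload-521770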
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
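
The repeated "martian source" lines mean the kernel saw packets whose source address is implausible for the interface they arrived on (here 127.0.0.1 appearing on eth0); they are logged only while log_martians is enabled. To check or toggle that on the host:

    # Inspect and, if desired, enable martian-packet logging.
    sysctl net.ipv4.conf.all.log_martians
    sudo sysctl -w net.ipv4.conf.all.log_martians=1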
	
	
	==> etcd [1180b54a98400f332dbb4dda677c01fc02e3c44f901938b0567810c83d6df692] <==
	{"level":"warn","ts":"2025-12-06T09:52:32.119861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.128718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.136268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.143278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.150473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.156959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.163637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.170350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.180601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.186864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.193398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.199916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.206511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.220820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.227540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.234427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.241545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.248336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.260368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.267235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.275653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.283415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:52:32.339734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:14.485528Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.604105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-521770\" limit:1 ","response":"range_response_count:1 size:5270"}
	{"level":"info","ts":"2025-12-06T09:53:14.485636Z","caller":"traceutil/trace.go:172","msg":"trace[1154857655] range","detail":"{range_begin:/registry/minions/no-preload-521770; range_end:; response_count:1; response_revision:676; }","duration":"193.730067ms","start":"2025-12-06T09:53:14.291890Z","end":"2025-12-06T09:53:14.485620Z","steps":["trace[1154857655] 'range keys from in-memory index tree'  (duration: 193.443792ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:53:31 up  2:35,  0 user,  load average: 5.35, 3.46, 3.41
	Linux no-preload-521770 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e7db9271d27970d4ad67bfb8b35bb164eefe0492d4f17948f191a67d54e12bf] <==
	I1206 09:52:34.364864       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:52:34.365158       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:52:34.365322       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:52:34.365350       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:52:34.365377       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:52:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:52:34.569489       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:52:34.569539       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:52:34.569622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:52:34.570081       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:52:34.869712       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:52:34.869743       1 metrics.go:72] Registering metrics
	I1206 09:52:34.869798       1 controller.go:711] "Syncing nftables rules"
	I1206 09:52:44.569987       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:52:44.570059       1 main.go:301] handling current node
	I1206 09:52:54.573553       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:52:54.573585       1 main.go:301] handling current node
	I1206 09:53:04.569638       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:53:04.569693       1 main.go:301] handling current node
	I1206 09:53:14.569747       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:53:14.569793       1 main.go:301] handling current node
	I1206 09:53:24.571631       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:53:24.571669       1 main.go:301] handling current node
	
	
	==> kube-apiserver [585f10915444acd7acfdddbe9415b18fc4bb7c9d1e5009ad15a8bf10a9129068] <==
	I1206 09:52:32.841898       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.844171       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:52:32.844203       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:52:32.845130       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:52:32.845564       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:52:32.855650       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:52:32.869013       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:52:32.874628       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.874653       1 policy_source.go:248] refreshing policies
	I1206 09:52:32.935310       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:32.936302       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:52:32.936326       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:52:32.936600       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:52:33.146555       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:52:33.175478       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:52:33.193607       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:52:33.202616       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:52:33.209992       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:52:33.244699       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.50.12"}
	I1206 09:52:33.255873       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.80.194"}
	I1206 09:52:33.751134       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:52:36.453060       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:52:36.453122       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:52:36.552233       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:52:36.702163       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4740c81bbda6eb396add856fa79e529e77045345b6b8aafa409f0c035427e3e5] <==
	I1206 09:52:36.006837       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006879       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006947       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006961       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006625       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.006655       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007229       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007250       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007271       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007296       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007348       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:52:36.007406       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:52:36.007414       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:36.007419       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007596       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.007974       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.009204       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.009341       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.009751       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.011745       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.020310       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:36.105940       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:36.105962       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:52:36.105969       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:52:36.120950       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8ed2e44ffcad005a51eaa2da515c456cbde31d1c0f0f8025b9411ffda44f5ff8] <==
	I1206 09:52:34.184746       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:52:34.256785       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:52:34.357524       1 shared_informer.go:377] "Caches are synced"
	I1206 09:52:34.357698       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:52:34.357824       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:52:34.380396       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:52:34.380469       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:52:34.387136       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:52:34.387511       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:52:34.387527       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:52:34.389420       1 config.go:200] "Starting service config controller"
	I1206 09:52:34.390554       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:52:34.389919       1 config.go:309] "Starting node config controller"
	I1206 09:52:34.390687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:52:34.390708       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:52:34.389912       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:52:34.390718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:52:34.389941       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:52:34.390736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:52:34.490840       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:52:34.490851       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:52:34.490879       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
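
The single E-level line above is kube-proxy's own configuration hint: with nodePortAddresses unset, NodePort services accept connections on every local IP, and it suggests restricting them to the node's primary address. The suggested form, shown standalone (in practice this is set through kube-proxy's configuration rather than by running the binary by hand):

    # Restrict NodePort listeners to the node's primary IP, per the log's hint.
    kube-proxy --nodeport-addresses primary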
	
	
	==> kube-scheduler [9dc873b13be2daef40a2751e9c41eeada071f9d2a36935447fdcf8f69e38bcb0] <==
	E1206 09:52:32.852165       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.852927       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1206 09:52:32.852912       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.854172       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1206 09:52:32.854303       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:32.854396       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.854524       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1206 09:52:32.854679       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1206 09:52:32.854696       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1206 09:52:32.855116       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.855227       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:52:32.855287       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:32.855508       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:52:32.858284       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:52:32.858579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1206 09:52:32.858735       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1206 09:52:32.858847       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:52:32.858918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1206 09:52:32.858979       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1206 09:52:32.859519       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1206 09:52:32.860204       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:52:32.860977       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1206 09:52:32.861107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1206 09:52:32.859610       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1206 09:52:32.949820       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:52:50 no-preload-521770 kubelet[722]: E1206 09:52:50.318858     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: E1206 09:52:55.741429     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: I1206 09:52:55.741493     722 scope.go:122] "RemoveContainer" containerID="6d8adbbfbceb854ef17233697e783e687b0e2f5a41f9ffcf9fad4e00194e777c"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: I1206 09:52:55.838282     722 scope.go:122] "RemoveContainer" containerID="6d8adbbfbceb854ef17233697e783e687b0e2f5a41f9ffcf9fad4e00194e777c"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: E1206 09:52:55.838538     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: I1206 09:52:55.838573     722 scope.go:122] "RemoveContainer" containerID="f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f"
	Dec 06 09:52:55 no-preload-521770 kubelet[722]: E1206 09:52:55.838733     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:53:00 no-preload-521770 kubelet[722]: E1206 09:53:00.318797     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:53:00 no-preload-521770 kubelet[722]: I1206 09:53:00.318833     722 scope.go:122] "RemoveContainer" containerID="f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f"
	Dec 06 09:53:00 no-preload-521770 kubelet[722]: E1206 09:53:00.319019     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:53:04 no-preload-521770 kubelet[722]: I1206 09:53:04.864291     722 scope.go:122] "RemoveContainer" containerID="df87b3fa3a4a208955c1a48e6d46a19a5567b0311b97242991aea76fc0d6487e"
	Dec 06 09:53:13 no-preload-521770 kubelet[722]: E1206 09:53:13.404543     722 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mhwh5" containerName="coredns"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: E1206 09:53:16.740842     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: I1206 09:53:16.740887     722 scope.go:122] "RemoveContainer" containerID="f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: I1206 09:53:16.905163     722 scope.go:122] "RemoveContainer" containerID="f27c94741d9d1d0b5a18a44c084ff040ec6c72ee81e951c0138357890ca1d06f"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: E1206 09:53:16.905550     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: I1206 09:53:16.905582     722 scope.go:122] "RemoveContainer" containerID="a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845"
	Dec 06 09:53:16 no-preload-521770 kubelet[722]: E1206 09:53:16.905811     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:53:20 no-preload-521770 kubelet[722]: E1206 09:53:20.319285     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" containerName="dashboard-metrics-scraper"
	Dec 06 09:53:20 no-preload-521770 kubelet[722]: I1206 09:53:20.319339     722 scope.go:122] "RemoveContainer" containerID="a30f79b67fa580494363921180fbefbe2968742cac103e4bba9789bcf1771845"
	Dec 06 09:53:20 no-preload-521770 kubelet[722]: E1206 09:53:20.319562     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lhdkf_kubernetes-dashboard(00048e12-3d2d-40a4-bfc5-86f6355717f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lhdkf" podUID="00048e12-3d2d-40a4-bfc5-86f6355717f0"
	Dec 06 09:53:27 no-preload-521770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:53:27 no-preload-521770 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:53:27 no-preload-521770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:53:27 no-preload-521770 systemd[1]: kubelet.service: Consumed 1.891s CPU time.
	
	
	==> kubernetes-dashboard [e99a0c409aee494e84e9717b2c81fbd6716d787c2a6936c23c10595e6f8dc302] <==
	2025/12/06 09:52:40 Using namespace: kubernetes-dashboard
	2025/12/06 09:52:40 Using in-cluster config to connect to apiserver
	2025/12/06 09:52:40 Using secret token for csrf signing
	2025/12/06 09:52:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:52:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:52:40 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/06 09:52:40 Generating JWE encryption key
	2025/12/06 09:52:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:52:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:52:41 Initializing JWE encryption key from synchronized object
	2025/12/06 09:52:41 Creating in-cluster Sidecar client
	2025/12/06 09:52:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:52:41 Serving insecurely on HTTP port: 9090
	2025/12/06 09:53:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:52:40 Starting overwatch
	
	
	==> storage-provisioner [2d35003201fe16d420eeabcc215eddf55829c5afe83dc55f234e8b6334ec7d94] <==
	I1206 09:53:04.922835       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:53:04.930609       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:53:04.930664       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:53:04.933611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:08.389383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:12.650839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:16.249018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:19.303357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:22.326335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:22.332666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:22.332876       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:53:22.333062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-521770_dd6d35eb-aa45-4f94-98b4-1f2affa87112!
	I1206 09:53:22.333327       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61a34a7f-5161-40bf-8cdb-f26ed1163acf", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-521770_dd6d35eb-aa45-4f94-98b4-1f2affa87112 became leader
	W1206 09:53:22.341814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:22.345254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:22.433788       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-521770_dd6d35eb-aa45-4f94-98b4-1f2affa87112!
	W1206 09:53:24.348672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:24.353824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:26.357414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:26.361224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:28.364391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:28.417517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:30.420704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:30.425649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [df87b3fa3a4a208955c1a48e6d46a19a5567b0311b97242991aea76fc0d6487e] <==
	I1206 09:52:34.128960       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:53:04.137907       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
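
A side note on the storage-provisioner output above: the "attempting to acquire leader lease" / "successfully acquired lease" pair is client-go's standard leader-election handshake, and the repeated "v1 Endpoints is deprecated" warnings appear because the provisioner's lock is still backed by a v1 Endpoints object (the LeaderElection event it emits on winning references Kind "Endpoints"). Below is a minimal sketch of the same handshake using client-go's current Lease-based lock; the in-cluster config and the identity string are assumptions, and only the lease name and namespace come from the log:

	// Illustrative sketch only; not minikube's storage-provisioner code.
	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease name/namespace taken from the log; the identity is made up.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "sketch-identity"},
		}

		// Blocks: logs "attempting to acquire", then fires the callback on success,
		// mirroring the two leaderelection.go lines in the provisioner log.
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() { log.Println("lost lease; stopping") },
			},
		})
	}
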
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521770 -n no-preload-521770
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521770 -n no-preload-521770: exit status 2 (338.477437ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-521770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.65s)
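
One detail worth noting in the kubelet lines of the post-mortem logs above: the CrashLoopBackOff delay for dashboard-metrics-scraper doubles from 10s to 20s to 40s across restarts, which is the kubelet's capped exponential back-off. A toy model of that policy (only the 10s start and the doubling are taken from the log; the 5-minute cap is the kubelet's documented maximum):

	// Toy model of CrashLoopBackOff delay growth; illustrative only.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const maxDelay = 5 * time.Minute // kubelet's documented cap
		delay := 10 * time.Second        // first back-off seen in the log
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, delay)
			delay *= 2 // doubles each failed restart: 10s, 20s, 40s, ...
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}
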

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-759696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-759696 --alsologtostderr -v=1: exit status 80 (1.952943698s)

-- stdout --
	* Pausing node default-k8s-diff-port-759696 ... 
	
	

-- /stdout --
** stderr ** 
	I1206 09:53:55.879805  805226 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:55.880114  805226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:55.880123  805226 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:55.880128  805226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:55.880352  805226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:55.880613  805226 out.go:368] Setting JSON to false
	I1206 09:53:55.880633  805226 mustload.go:66] Loading cluster: default-k8s-diff-port-759696
	I1206 09:53:55.880990  805226 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:55.881377  805226 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759696 --format={{.State.Status}}
	I1206 09:53:55.900128  805226 host.go:66] Checking if "default-k8s-diff-port-759696" exists ...
	I1206 09:53:55.900527  805226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:55.966900  805226 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-06 09:53:55.956340603 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:55.967854  805226 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-759696 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:53:55.973598  805226 out.go:179] * Pausing node default-k8s-diff-port-759696 ... 
	I1206 09:53:55.974782  805226 host.go:66] Checking if "default-k8s-diff-port-759696" exists ...
	I1206 09:53:55.975122  805226 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:55.975176  805226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759696
	I1206 09:53:55.997215  805226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33221 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/default-k8s-diff-port-759696/id_rsa Username:docker}
	I1206 09:53:56.096295  805226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:56.129544  805226 pause.go:52] kubelet running: true
	I1206 09:53:56.129621  805226 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:56.347388  805226 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:56.347507  805226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:56.424674  805226 cri.go:89] found id: "fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a"
	I1206 09:53:56.424700  805226 cri.go:89] found id: "0cfc5bf6ac0e2acc0bfc6a44706294eb6472b9a0b6da79d346ae3dfa437729de"
	I1206 09:53:56.424706  805226 cri.go:89] found id: "0fd27514e3b63af2ee66e57c5e0d3db6ac0f18efcf343dbefc6b2f2e256584f0"
	I1206 09:53:56.424711  805226 cri.go:89] found id: "1289e6d7da285692d4fa714fc6797eeba4ead826886d680935fba4c4461f6875"
	I1206 09:53:56.424715  805226 cri.go:89] found id: "47b3688c94fe5f8791be5571032439e7f24a58e707a037c30b8f448c060aafe2"
	I1206 09:53:56.424720  805226 cri.go:89] found id: "49d5db0bf8c817844e681d0c272f78bea45bd7a69be93dbd6b87ce00764c41c3"
	I1206 09:53:56.424724  805226 cri.go:89] found id: "2b4e13927c1dd98b75c5d83e4aec397dc2e4749caaf7821cfac821811b1d3da7"
	I1206 09:53:56.424729  805226 cri.go:89] found id: "96bf17c21fc5ef4c1b3dca26666987c3ead355280a820de4ef784becde9de15b"
	I1206 09:53:56.424733  805226 cri.go:89] found id: "5081ea10eaf550a1552364d04b9716dd633af5964fac9bc876f2cc1e5ca71b16"
	I1206 09:53:56.424747  805226 cri.go:89] found id: "37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	I1206 09:53:56.424758  805226 cri.go:89] found id: "8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b"
	I1206 09:53:56.424762  805226 cri.go:89] found id: ""
	I1206 09:53:56.424815  805226 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:56.438157  805226 retry.go:31] will retry after 268.420871ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:56Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:56.707658  805226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:56.722907  805226 pause.go:52] kubelet running: false
	I1206 09:53:56.722964  805226 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:56.927581  805226 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:56.927662  805226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:57.000476  805226 cri.go:89] found id: "fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a"
	I1206 09:53:57.000506  805226 cri.go:89] found id: "0cfc5bf6ac0e2acc0bfc6a44706294eb6472b9a0b6da79d346ae3dfa437729de"
	I1206 09:53:57.000513  805226 cri.go:89] found id: "0fd27514e3b63af2ee66e57c5e0d3db6ac0f18efcf343dbefc6b2f2e256584f0"
	I1206 09:53:57.000518  805226 cri.go:89] found id: "1289e6d7da285692d4fa714fc6797eeba4ead826886d680935fba4c4461f6875"
	I1206 09:53:57.000522  805226 cri.go:89] found id: "47b3688c94fe5f8791be5571032439e7f24a58e707a037c30b8f448c060aafe2"
	I1206 09:53:57.000529  805226 cri.go:89] found id: "49d5db0bf8c817844e681d0c272f78bea45bd7a69be93dbd6b87ce00764c41c3"
	I1206 09:53:57.000533  805226 cri.go:89] found id: "2b4e13927c1dd98b75c5d83e4aec397dc2e4749caaf7821cfac821811b1d3da7"
	I1206 09:53:57.000538  805226 cri.go:89] found id: "96bf17c21fc5ef4c1b3dca26666987c3ead355280a820de4ef784becde9de15b"
	I1206 09:53:57.000543  805226 cri.go:89] found id: "5081ea10eaf550a1552364d04b9716dd633af5964fac9bc876f2cc1e5ca71b16"
	I1206 09:53:57.000564  805226 cri.go:89] found id: "37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	I1206 09:53:57.000571  805226 cri.go:89] found id: "8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b"
	I1206 09:53:57.000574  805226 cri.go:89] found id: ""
	I1206 09:53:57.000639  805226 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:57.014126  805226 retry.go:31] will retry after 490.32826ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:57Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:57.504570  805226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:57.517120  805226 pause.go:52] kubelet running: false
	I1206 09:53:57.517183  805226 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:57.674293  805226 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:57.674382  805226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:57.747419  805226 cri.go:89] found id: "fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a"
	I1206 09:53:57.747451  805226 cri.go:89] found id: "0cfc5bf6ac0e2acc0bfc6a44706294eb6472b9a0b6da79d346ae3dfa437729de"
	I1206 09:53:57.747492  805226 cri.go:89] found id: "0fd27514e3b63af2ee66e57c5e0d3db6ac0f18efcf343dbefc6b2f2e256584f0"
	I1206 09:53:57.747510  805226 cri.go:89] found id: "1289e6d7da285692d4fa714fc6797eeba4ead826886d680935fba4c4461f6875"
	I1206 09:53:57.747513  805226 cri.go:89] found id: "47b3688c94fe5f8791be5571032439e7f24a58e707a037c30b8f448c060aafe2"
	I1206 09:53:57.747517  805226 cri.go:89] found id: "49d5db0bf8c817844e681d0c272f78bea45bd7a69be93dbd6b87ce00764c41c3"
	I1206 09:53:57.747520  805226 cri.go:89] found id: "2b4e13927c1dd98b75c5d83e4aec397dc2e4749caaf7821cfac821811b1d3da7"
	I1206 09:53:57.747523  805226 cri.go:89] found id: "96bf17c21fc5ef4c1b3dca26666987c3ead355280a820de4ef784becde9de15b"
	I1206 09:53:57.747531  805226 cri.go:89] found id: "5081ea10eaf550a1552364d04b9716dd633af5964fac9bc876f2cc1e5ca71b16"
	I1206 09:53:57.747538  805226 cri.go:89] found id: "37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	I1206 09:53:57.747545  805226 cri.go:89] found id: "8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b"
	I1206 09:53:57.747547  805226 cri.go:89] found id: ""
	I1206 09:53:57.747590  805226 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:57.762469  805226 out.go:203] 
	W1206 09:53:57.763615  805226 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:53:57.763633  805226 out.go:285] * 
	* 
	W1206 09:53:57.768731  805226 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:53:57.769835  805226 out.go:203] 

** /stderr **
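
The actual failure mechanism is visible in the retries above: to pause, minikube first lists running containers with `sudo runc list -f json`, and on this node that command exits 1 with "open /run/runc: no such file or directory" (runc's default state directory is absent), so once the retries are exhausted the pause aborts with GUEST_PAUSE. A rough stand-in for that probe, assuming only that sudo and runc are on the PATH; this is not minikube's actual code, it just shells out the same command and surfaces the exit error:

	// Reproduces the failing check by hand; illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command as the ssh_runner lines above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node: exit status 1, "open /run/runc: no such file or directory".
			fmt.Printf("list running: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers: %s\n", out)
	}
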
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-759696 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-759696
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-759696:

-- stdout --
	[
	    {
	        "Id": "7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87",
	        "Created": "2025-12-06T09:51:52.674641004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 789836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:54.636676495Z",
	            "FinishedAt": "2025-12-06T09:52:53.780352741Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/hosts",
	        "LogPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87-json.log",
	        "Name": "/default-k8s-diff-port-759696",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-759696:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-759696",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87",
	                "LowerDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-759696",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-759696/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-759696",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-759696",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-759696",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "44caf01710ea177df544b1306f7cc3fd12de4a88f3a26cf23bef177a9ef6402a",
	            "SandboxKey": "/var/run/docker/netns/44caf01710ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33224"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-759696": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8eafe0b310a8d3a7cc2c2f8b223b86754d5d6f80cb6837e1258939016171b84",
	                    "EndpointID": "9e1bfdf35fd6bc6251ac8252d9247e2e6afb82f8f94e959e82322f65797aed10",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6a:07:11:62:8a:57",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-759696",
	                        "7e15a5997079"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
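
As a reading aid for the inspect output above: the cli_runner step earlier in the stderr extracts the host-side SSH port with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, which against the Ports block above evaluates to 33221, the same port the sshutil line then dials. A self-contained sketch of that template evaluation; the struct is a hand-written stand-in for Docker's real inspect types:

	// Evaluates the same template expression as the docker inspect -f call above.
	package main

	import (
		"os"
		"text/template"
	)

	type binding struct{ HostIp, HostPort string }

	type container struct {
		NetworkSettings struct {
			Ports map[string][]binding
		}
	}

	func main() {
		// Trimmed-down copy of the Ports block from the inspect output.
		var c container
		c.NetworkSettings.Ports = map[string][]binding{
			"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33221"}},
		}

		// index the map by "22/tcp", take element 0, read its HostPort field.
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		_ = tmpl.Execute(os.Stdout, c) // prints: 33221
	}
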
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696: exit status 2 (330.471654ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-759696 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-759696 logs -n 25: (1.17775085s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-997968 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ image   │ newest-cni-641599 image list --format=json                                                                                                                                                                                                           │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p newest-cni-641599 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p auto-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-983381                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ no-preload-521770 image list --format=json                                                                                                                                                                                                           │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p no-preload-521770 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ delete  │ -p no-preload-521770                                                                                                                                                                                                                                 │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p no-preload-521770                                                                                                                                                                                                                                 │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p kindnet-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-983381               │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ default-k8s-diff-port-759696 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p default-k8s-diff-port-759696 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ embed-certs-997968 image list --format=json                                                                                                                                                                                                          │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-997968 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:53:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
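
The "Log line format" header above describes the glog-style records that make up the rest of this section. As a minimal illustrative sketch only (not part of the test output), the following Go program splits one such line into its fields; the regexp is derived directly from the format string above, and the sample line is copied from the log below:

// Parse a glog-style log line: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
package main

import (
	"fmt"
	"regexp"
)

var glogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I1206 09:53:35.736114  802078 out.go:360] Setting OutFile to fd 1 ..."
	m := glogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a glog line")
		return
	}
	// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=thread id, m[5]=file, m[6]=line, m[7]=message
	fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
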
	I1206 09:53:35.736114  802078 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:35.736358  802078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:35.736366  802078 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:35.736370  802078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:35.736608  802078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:35.737088  802078 out.go:368] Setting JSON to false
	I1206 09:53:35.738323  802078 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9360,"bootTime":1765005456,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:53:35.738388  802078 start.go:143] virtualization: kvm guest
	I1206 09:53:35.740317  802078 out.go:179] * [kindnet-983381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:53:35.741422  802078 notify.go:221] Checking for updates...
	I1206 09:53:35.741506  802078 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:53:35.742495  802078 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:53:35.743616  802078 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:35.744630  802078 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:53:35.745749  802078 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:53:35.746924  802078 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:53:35.748304  802078 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748393  802078 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748491  802078 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748589  802078 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:53:35.772982  802078 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:53:35.773088  802078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:35.830680  802078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:35.820532325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
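
The info dump above comes from the probe logged as `docker system info --format "{{json .}}"`. A minimal Go sketch of the same probe, decoding a few of the fields visible in the dump (NCPU, MemTotal, CgroupDriver, ServerVersion); the pared-down struct here is an assumption for illustration, not minikube's own type:

// Shell out to docker and decode a handful of fields from its JSON info.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type dockerInfo struct {
	NCPU          int
	MemTotal      int64
	CgroupDriver  string
	ServerVersion string
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
		info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
}

On the build host above this would report docker 29.1.2, 8 CPUs, 33652072448 bytes of RAM, and the systemd cgroup driver, matching the dump.
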
	I1206 09:53:35.830809  802078 docker.go:319] overlay module found
	I1206 09:53:35.832481  802078 out.go:179] * Using the docker driver based on user configuration
	I1206 09:53:35.833543  802078 start.go:309] selected driver: docker
	I1206 09:53:35.833558  802078 start.go:927] validating driver "docker" against <nil>
	I1206 09:53:35.833571  802078 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:53:35.834109  802078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:35.894209  802078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:35.883098075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:35.894359  802078 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:53:35.894710  802078 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:35.896340  802078 out.go:179] * Using Docker driver with root privileges
	I1206 09:53:35.897286  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:35.897302  802078 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:53:35.897380  802078 start.go:353] cluster config:
	{Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:35.898495  802078 out.go:179] * Starting "kindnet-983381" primary control-plane node in "kindnet-983381" cluster
	I1206 09:53:35.899494  802078 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:53:35.900543  802078 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:53:35.901765  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:35.901802  802078 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:53:35.901815  802078 cache.go:65] Caching tarball of preloaded images
	I1206 09:53:35.901854  802078 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:53:35.901908  802078 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:53:35.901922  802078 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:53:35.902031  802078 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json ...
	I1206 09:53:35.902059  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json: {Name:mk3a79de74bde68ec31b151eacb622c73b38daf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:35.924146  802078 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:53:35.924170  802078 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:53:35.924185  802078 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:53:35.924214  802078 start.go:360] acquireMachinesLock for kindnet-983381: {Name:mk6e4785105686f4f72d41f8081d2646bcdec596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:53:35.924309  802078 start.go:364] duration metric: took 76.057µs to acquireMachinesLock for "kindnet-983381"
	I1206 09:53:35.924331  802078 start.go:93] Provisioning new machine with config: &{Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:35.924423  802078 start.go:125] createHost starting for "" (driver="docker")
	W1206 09:53:33.030174  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:35.530789  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:35.464753  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:35.964668  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:36.464301  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:36.964782  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:37.464682  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:37.965001  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:38.464290  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:38.545704  796626 kubeadm.go:1114] duration metric: took 4.655385618s to wait for elevateKubeSystemPrivileges
	I1206 09:53:38.545746  796626 kubeadm.go:403] duration metric: took 16.212898927s to StartCluster
	I1206 09:53:38.545772  796626 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:38.545859  796626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:38.548341  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:38.548970  796626 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:38.548998  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:53:38.549213  796626 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:38.549132  796626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:53:38.549388  796626 addons.go:70] Setting storage-provisioner=true in profile "auto-983381"
	I1206 09:53:38.549417  796626 addons.go:239] Setting addon storage-provisioner=true in "auto-983381"
	I1206 09:53:38.549483  796626 host.go:66] Checking if "auto-983381" exists ...
	I1206 09:53:38.550037  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.549392  796626 addons.go:70] Setting default-storageclass=true in profile "auto-983381"
	I1206 09:53:38.550507  796626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-983381"
	I1206 09:53:38.550833  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.550926  796626 out.go:179] * Verifying Kubernetes components...
	I1206 09:53:38.553410  796626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:38.588099  796626 addons.go:239] Setting addon default-storageclass=true in "auto-983381"
	I1206 09:53:38.588150  796626 host.go:66] Checking if "auto-983381" exists ...
	I1206 09:53:38.588574  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.617638  796626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:38.617665  796626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:53:38.617866  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:38.626643  796626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1206 09:53:35.198662  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:37.200822  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:38.638810  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:38.668107  796626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:38.668132  796626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:53:38.668195  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:38.675658  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:53:38.692832  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:38.702368  796626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:38.748795  796626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:38.801277  796626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:39.027533  796626 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1206 09:53:39.029182  796626 node_ready.go:35] waiting up to 15m0s for node "auto-983381" to be "Ready" ...
	I1206 09:53:39.654289  796626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-983381" context rescaled to 1 replicas
	I1206 09:53:40.345178  796626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.543850325s)
	I1206 09:53:40.346941  796626 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1206 09:53:35.925937  802078 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:53:35.926209  802078 start.go:159] libmachine.API.Create for "kindnet-983381" (driver="docker")
	I1206 09:53:35.926252  802078 client.go:173] LocalClient.Create starting
	I1206 09:53:35.926340  802078 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:53:35.926378  802078 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:35.926405  802078 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:35.926506  802078 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:53:35.926535  802078 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:35.926552  802078 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:35.926986  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:53:35.943608  802078 cli_runner.go:211] docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:53:35.943723  802078 network_create.go:284] running [docker network inspect kindnet-983381] to gather additional debugging logs...
	I1206 09:53:35.943748  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381
	W1206 09:53:35.960448  802078 cli_runner.go:211] docker network inspect kindnet-983381 returned with exit code 1
	I1206 09:53:35.960495  802078 network_create.go:287] error running [docker network inspect kindnet-983381]: docker network inspect kindnet-983381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-983381 not found
	I1206 09:53:35.960514  802078 network_create.go:289] output of [docker network inspect kindnet-983381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-983381 not found
	
	** /stderr **
	I1206 09:53:35.960636  802078 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:35.980401  802078 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:53:35.981149  802078 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:53:35.981925  802078 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:53:35.982560  802078 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fadb45f2248d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:97:af:e5:cc:0b} reservation:<nil>}
	I1206 09:53:35.983088  802078 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5d9447c39c3c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e2:61:5e:c6:7b:21} reservation:<nil>}
	I1206 09:53:35.983881  802078 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e6b5e0}
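
The five "skipping subnet" lines above step the third octet by 9 (49, 58, 67, 76, 85) before settling on 192.168.94.0/24. A minimal sketch of that walk; the taken() set below is hard-coded from the log for illustration, whereas minikube actually inspects host interfaces and existing reservations:

// Walk 192.168.x.0/24 candidates with a stride of 9 until one is free.
package main

import "fmt"

func taken(subnet string) bool {
	// Assumption for illustration: the five subnets the log skipped.
	used := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	return used[subnet]
}

func main() {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken(subnet) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}
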
	I1206 09:53:35.983907  802078 network_create.go:124] attempt to create docker network kindnet-983381 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:53:35.983952  802078 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-983381 kindnet-983381
	I1206 09:53:36.037591  802078 network_create.go:108] docker network kindnet-983381 192.168.94.0/24 created
	I1206 09:53:36.037621  802078 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-983381" container
	I1206 09:53:36.037678  802078 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:53:36.055484  802078 cli_runner.go:164] Run: docker volume create kindnet-983381 --label name.minikube.sigs.k8s.io=kindnet-983381 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:53:36.074528  802078 oci.go:103] Successfully created a docker volume kindnet-983381
	I1206 09:53:36.074605  802078 cli_runner.go:164] Run: docker run --rm --name kindnet-983381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-983381 --entrypoint /usr/bin/test -v kindnet-983381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:53:36.495904  802078 oci.go:107] Successfully prepared a docker volume kindnet-983381
	I1206 09:53:36.495988  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:36.496004  802078 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:53:36.496085  802078 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:53:40.493659  802078 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.997500219s)
	I1206 09:53:40.493705  802078 kic.go:203] duration metric: took 3.997696888s to extract preloaded images to volume ...
	W1206 09:53:40.493857  802078 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:53:40.493908  802078 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:53:40.493960  802078 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:53:40.553379  802078 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-983381 --name kindnet-983381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-983381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-983381 --network kindnet-983381 --ip 192.168.94.2 --volume kindnet-983381:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	W1206 09:53:37.530880  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:39.530936  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:40.844704  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Running}}
	I1206 09:53:40.865257  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:40.884729  802078 cli_runner.go:164] Run: docker exec kindnet-983381 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:53:40.933934  802078 oci.go:144] the created container "kindnet-983381" has a running status.
	I1206 09:53:40.933992  802078 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa...
	I1206 09:53:41.065963  802078 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:53:41.097932  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:41.118699  802078 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:53:41.118719  802078 kic_runner.go:114] Args: [docker exec --privileged kindnet-983381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:53:41.177800  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:41.202566  802078 machine.go:94] provisionDockerMachine start ...
	I1206 09:53:41.202682  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.226294  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.226976  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.227014  802078 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:53:41.366826  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-983381
	
	I1206 09:53:41.366854  802078 ubuntu.go:182] provisioning hostname "kindnet-983381"
	I1206 09:53:41.366930  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.388560  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.388853  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.388868  802078 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-983381 && echo "kindnet-983381" | sudo tee /etc/hostname
	I1206 09:53:41.533194  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-983381
	
	I1206 09:53:41.533282  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.553319  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.553612  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.553649  802078 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-983381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-983381/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-983381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:53:41.687391  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: 
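
The hostname commands above run over SSH to the forwarded port 127.0.0.1:33236 as user docker, authenticating with the profile's key. A minimal round-trip sketch using golang.org/x/crypto/ssh as a stand-in for minikube's own client; the port and key path are copied from the log, and InsecureIgnoreHostKey is tolerable here only because the endpoint is a local test container:

// Dial the forwarded SSH port of the kic container and run one command.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test rig only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33236", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hostname: %s", out)
}
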
	I1206 09:53:41.687422  802078 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:53:41.687496  802078 ubuntu.go:190] setting up certificates
	I1206 09:53:41.687511  802078 provision.go:84] configureAuth start
	I1206 09:53:41.687570  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:41.706986  802078 provision.go:143] copyHostCerts
	I1206 09:53:41.707057  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:53:41.707070  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:53:41.707141  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:53:41.707232  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:53:41.707242  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:53:41.707269  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:53:41.707336  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:53:41.707343  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:53:41.707366  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:53:41.707413  802078 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.kindnet-983381 san=[127.0.0.1 192.168.94.2 kindnet-983381 localhost minikube]
	I1206 09:53:41.806395  802078 provision.go:177] copyRemoteCerts
	I1206 09:53:41.806477  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:53:41.806526  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.825939  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:41.922925  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1206 09:53:41.943043  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:53:41.962026  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:53:41.980803  802078 provision.go:87] duration metric: took 293.274301ms to configureAuth
	I1206 09:53:41.980839  802078 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:53:41.981030  802078 config.go:182] Loaded profile config "kindnet-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:41.981180  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.001023  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:42.001294  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:42.001312  802078 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:53:42.284104  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:53:42.284126  802078 machine.go:97] duration metric: took 1.081535088s to provisionDockerMachine
	I1206 09:53:42.284136  802078 client.go:176] duration metric: took 6.35787804s to LocalClient.Create
	I1206 09:53:42.284158  802078 start.go:167] duration metric: took 6.357949811s to libmachine.API.Create "kindnet-983381"
	I1206 09:53:42.284171  802078 start.go:293] postStartSetup for "kindnet-983381" (driver="docker")
	I1206 09:53:42.284188  802078 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:53:42.284255  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:53:42.284310  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.302172  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.400807  802078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:53:42.404744  802078 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:53:42.404778  802078 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:53:42.404792  802078 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:53:42.404846  802078 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:53:42.404962  802078 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:53:42.405098  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:53:42.414846  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:42.437157  802078 start.go:296] duration metric: took 152.966336ms for postStartSetup
	I1206 09:53:42.437535  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:42.455950  802078 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json ...
	I1206 09:53:42.456172  802078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:53:42.456212  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.474118  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.568913  802078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:53:42.573671  802078 start.go:128] duration metric: took 6.649231212s to createHost
	I1206 09:53:42.573696  802078 start.go:83] releasing machines lock for "kindnet-983381", held for 6.649375377s
	I1206 09:53:42.573776  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:42.593508  802078 ssh_runner.go:195] Run: cat /version.json
	I1206 09:53:42.593569  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.593516  802078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:53:42.593700  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.611419  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.612544  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.760558  802078 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:42.767177  802078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:53:42.803261  802078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:53:42.807859  802078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:53:42.807927  802078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:53:42.833482  802078 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
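
The find/mv step above renames any bridge or podman CNI configs to *.mk_disabled so they do not conflict with the CNI minikube installs. A minimal local sketch of the same rename, assuming the standard /etc/cni/net.d directory (the real step runs remotely via ssh_runner):

// Disable pre-existing bridge/podman CNI configs by renaming them.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("disabled", src)
	}
}
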
	I1206 09:53:42.833508  802078 start.go:496] detecting cgroup driver to use...
	I1206 09:53:42.833546  802078 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:53:42.833599  802078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:53:42.849782  802078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:53:42.862043  802078 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:53:42.862089  802078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:53:42.879135  802078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:53:42.898925  802078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:53:42.987951  802078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:53:43.078627  802078 docker.go:234] disabling docker service ...
	I1206 09:53:43.078699  802078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:53:43.100368  802078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:53:43.113370  802078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:53:43.201176  802078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:53:43.294081  802078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:53:43.307631  802078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:53:43.321801  802078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:53:43.321856  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.331480  802078 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:53:43.331547  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.340367  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.349027  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.357421  802078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:53:43.365342  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.375704  802078 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.390512  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.399512  802078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:53:43.406601  802078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:53:43.413755  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:43.497188  802078 ssh_runner.go:195] Run: sudo systemctl restart crio
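
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) before crio is restarted. A minimal sketch of two of those line-anchored replaces; the replacement values come from the log, but the starting file contents below are assumed for illustration:

// Apply sed-style, line-anchored replaces to a CRI-O config fragment.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting contents; the real file lives inside the node container.
	conf := "pause_image = \"registry.k8s.io/pause:3.10\"\ncgroup_manager = \"cgroupfs\"\n"

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	fmt.Print(conf)
}
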
	I1206 09:53:43.639801  802078 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:53:43.639882  802078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:53:43.644037  802078 start.go:564] Will wait 60s for crictl version
	I1206 09:53:43.644085  802078 ssh_runner.go:195] Run: which crictl
	I1206 09:53:43.647878  802078 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:53:43.673703  802078 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:53:43.673775  802078 ssh_runner.go:195] Run: crio --version
	I1206 09:53:43.703877  802078 ssh_runner.go:195] Run: crio --version
	I1206 09:53:43.733325  802078 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
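
Both 60-second waits above (first for the socket, then for a crictl that answers) can be reproduced with a plain poll; the 1-second interval here is an assumption, only the 60s budget comes from the log:

    # wait up to 60s for CRI-O to come back after the restart (sketch)
    for _ in $(seq 1 60); do
      [ -S /var/run/crio/crio.sock ] && sudo crictl version >/dev/null 2>&1 && break
      sleep 1
    done
    stat /var/run/crio/crio.sock
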
	W1206 09:53:39.697901  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:41.698088  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:42.698368  789560 pod_ready.go:94] pod "coredns-66bc5c9577-gpnjq" is "Ready"
	I1206 09:53:42.698400  789560 pod_ready.go:86] duration metric: took 37.505994586s for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.700901  789560 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.705139  789560 pod_ready.go:94] pod "etcd-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.705165  789560 pod_ready.go:86] duration metric: took 4.236162ms for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.707252  789560 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.711008  789560 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.711028  789560 pod_ready.go:86] duration metric: took 3.752374ms for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.713026  789560 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.897101  789560 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.897138  789560 pod_ready.go:86] duration metric: took 184.092641ms for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.097595  789560 pod_ready.go:83] waiting for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.497251  789560 pod_ready.go:94] pod "kube-proxy-jstq5" is "Ready"
	I1206 09:53:43.497282  789560 pod_ready.go:86] duration metric: took 399.656581ms for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.697290  789560 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.096580  789560 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:44.096611  789560 pod_ready.go:86] duration metric: took 399.289382ms for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.096627  789560 pod_ready.go:40] duration metric: took 38.907883012s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:44.141173  789560 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:53:44.143056  789560 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-759696" cluster and "default" namespace by default
	W1206 09:53:42.029753  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:43.530143  792441 pod_ready.go:94] pod "coredns-66bc5c9577-kw8nl" is "Ready"
	I1206 09:53:43.530177  792441 pod_ready.go:86] duration metric: took 31.006235572s for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.532504  792441 pod_ready.go:83] waiting for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.539381  792441 pod_ready.go:94] pod "etcd-embed-certs-997968" is "Ready"
	I1206 09:53:43.539408  792441 pod_ready.go:86] duration metric: took 6.868509ms for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.541690  792441 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.545546  792441 pod_ready.go:94] pod "kube-apiserver-embed-certs-997968" is "Ready"
	I1206 09:53:43.545571  792441 pod_ready.go:86] duration metric: took 3.85484ms for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.547358  792441 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.728143  792441 pod_ready.go:94] pod "kube-controller-manager-embed-certs-997968" is "Ready"
	I1206 09:53:43.728172  792441 pod_ready.go:86] duration metric: took 180.793456ms for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.928272  792441 pod_ready.go:83] waiting for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.328082  792441 pod_ready.go:94] pod "kube-proxy-m2zpr" is "Ready"
	I1206 09:53:44.328117  792441 pod_ready.go:86] duration metric: took 399.817969ms for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.528776  792441 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.927733  792441 pod_ready.go:94] pod "kube-scheduler-embed-certs-997968" is "Ready"
	I1206 09:53:44.927763  792441 pod_ready.go:86] duration metric: took 398.958608ms for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.927778  792441 pod_ready.go:40] duration metric: took 32.40863001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:44.980591  792441 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:53:44.982680  792441 out.go:179] * Done! kubectl is now configured to use "embed-certs-997968" cluster and "default" namespace by default
	I1206 09:53:43.734370  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:43.751497  802078 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:53:43.755659  802078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
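
The grep/echo pipeline above is an idempotent way to rewrite a single /etc/hosts entry: filter out any stale line for the name, append the fresh IP-and-name pair, and copy the temp file back over /etc/hosts in one step. Verifying the result is a one-liner:

    grep -w host.minikube.internal /etc/hosts    # 192.168.94.1  host.minikube.internal
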
	I1206 09:53:43.765989  802078 kubeadm.go:884] updating cluster {Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:53:43.766104  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:43.766146  802078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:43.799525  802078 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:43.799546  802078 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:53:43.799590  802078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:43.825735  802078 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:43.825758  802078 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:53:43.825766  802078 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1206 09:53:43.825861  802078 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-983381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
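
One subtlety in the drop-in above: the bare `ExecStart=` line is deliberate. In a systemd override (this one is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below), an empty assignment clears the ExecStart inherited from the base unit, so the following ExecStart fully replaces it instead of being rejected as a duplicate. The merged view can be inspected with:

    systemctl cat kubelet    # shows the base unit plus every drop-in, in order
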
	I1206 09:53:43.825926  802078 ssh_runner.go:195] Run: crio config
	I1206 09:53:43.872261  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:43.872292  802078 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:53:43.872313  802078 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-983381 NodeName:kindnet-983381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:53:43.872443  802078 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-983381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
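
The rendered kubeadm config ends here; per the scp line below it is staged as /var/tmp/minikube/kubeadm.yaml.new (2210 bytes) before being copied into place. Such a file can be validated without mutating the node at all; `kubeadm init --dry-run` is the stock way to do that (an illustrative check, not a step the logged run performs):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
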
	
	I1206 09:53:43.872538  802078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:53:43.881153  802078 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:53:43.881224  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:53:43.889391  802078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1206 09:53:43.903305  802078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:53:43.918720  802078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1206 09:53:43.931394  802078 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:53:43.935089  802078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:43.944888  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:44.030422  802078 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:44.054751  802078 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381 for IP: 192.168.94.2
	I1206 09:53:44.054774  802078 certs.go:195] generating shared ca certs ...
	I1206 09:53:44.054796  802078 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.054979  802078 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:53:44.055055  802078 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:53:44.055074  802078 certs.go:257] generating profile certs ...
	I1206 09:53:44.055148  802078 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key
	I1206 09:53:44.055166  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt with IP's: []
	I1206 09:53:44.179136  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt ...
	I1206 09:53:44.179163  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt: {Name:mkbed0739e68db5951cd1670ef77a82b17aedb26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.179330  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key ...
	I1206 09:53:44.179342  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key: {Name:mk3e2c0a04a2e3e8f578932802d27c8b90d53860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.179422  802078 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3
	I1206 09:53:44.179436  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:53:44.342441  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 ...
	I1206 09:53:44.342476  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3: {Name:mk0af4503346333895c5c579d4fb2a8c9dcfdcee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.342649  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3 ...
	I1206 09:53:44.342667  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3: {Name:mk0a4a58e7f8845d02778448b3e5355101c2e3fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.342770  802078 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt
	I1206 09:53:44.342868  802078 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key
	I1206 09:53:44.342951  802078 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key
	I1206 09:53:44.342972  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt with IP's: []
	I1206 09:53:44.520746  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt ...
	I1206 09:53:44.520773  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt: {Name:mk958624794dd2556a8291c9921b454b157f3c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.520946  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key ...
	I1206 09:53:44.520964  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key: {Name:mk18cc910abc009e83545d8f4f4f90e12f1bb752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
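
minikube generates these profile certs in-process (crypto.go), not by shelling out. For reference, the equivalent client-cert issuance with plain openssl looks like this; the file names, subject, and 365-day validity are illustrative assumptions, not values from the run:

    # issue a client cert signed by an existing CA (openssl equivalent, sketch)
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out client.crt
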
	I1206 09:53:44.521164  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:53:44.521215  802078 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:53:44.521249  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:53:44.521292  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:53:44.521333  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:53:44.521369  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:53:44.521446  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:44.522095  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:53:44.543389  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:53:44.561934  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:53:44.580049  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:53:44.597598  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:53:44.616278  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:53:44.633517  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:53:44.650699  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:53:44.668505  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:53:44.687149  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:53:44.704280  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:53:44.721767  802078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:53:44.734395  802078 ssh_runner.go:195] Run: openssl version
	I1206 09:53:44.740626  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.747691  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:53:44.755054  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.758535  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.758584  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.794842  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:44.803151  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:44.811376  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.819662  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:53:44.827450  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.831929  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.831984  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.868641  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:53:44.877016  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:53:44.884481  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.892077  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:53:44.899367  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.903291  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.903338  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.949126  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:53:44.957677  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
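
Each `openssl x509 -hash` call above prints the subject hash (3ec20f2e, b5213941, 51391683) that OpenSSL expects as the symlink name in /etc/ssl/certs, which is exactly what the following `ln -fs` creates. The two steps, combined:

    # trust a CA by linking it under its OpenSSL subject hash
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
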
	I1206 09:53:44.966120  802078 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:53:44.970308  802078 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:53:44.970374  802078 kubeadm.go:401] StartCluster: {Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:44.970477  802078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:53:44.970560  802078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:53:45.001343  802078 cri.go:89] found id: ""
	I1206 09:53:45.001414  802078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:53:45.012421  802078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:53:45.021194  802078 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:53:45.021261  802078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:53:45.029115  802078 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:53:45.029134  802078 kubeadm.go:158] found existing configuration files:
	
	I1206 09:53:45.029169  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:53:45.037815  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:53:45.037872  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:53:45.045435  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:53:45.053955  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:53:45.054012  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:53:45.062303  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:53:45.070593  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:53:45.070643  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:53:45.079208  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:53:45.088134  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:53:45.088189  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:53:45.095979  802078 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:53:45.138079  802078 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:53:45.138199  802078 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:53:45.160175  802078 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:53:45.160254  802078 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:53:45.160285  802078 kubeadm.go:319] OS: Linux
	I1206 09:53:45.160352  802078 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:53:45.160443  802078 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:53:45.160554  802078 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:53:45.160647  802078 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:53:45.160734  802078 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:53:45.160812  802078 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:53:45.160892  802078 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:53:45.160962  802078 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:53:45.221322  802078 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:53:45.221523  802078 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:53:45.221688  802078 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:53:45.229074  802078 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
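
As the preflight hint above says, the control-plane images can be fetched ahead of time; for this cluster's version the matching commands would be (illustrative — this run relies on the preloaded image tarball instead):

    kubeadm config images list --kubernetes-version v1.34.2
    sudo kubeadm config images pull --kubernetes-version v1.34.2
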
	I1206 09:53:40.367608  796626 addons.go:530] duration metric: took 1.818454225s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1206 09:53:41.032946  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:43.533007  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:45.230909  802078 out.go:252]   - Generating certificates and keys ...
	I1206 09:53:45.231011  802078 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:53:45.231124  802078 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:53:45.410620  802078 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:53:45.930986  802078 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:53:46.263989  802078 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:53:46.476019  802078 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:53:46.655346  802078 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:53:46.655593  802078 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-983381 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:53:46.754725  802078 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:53:46.754894  802078 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-983381 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:53:46.832327  802078 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:53:46.992545  802078 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:53:47.179111  802078 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:53:47.179231  802078 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:53:47.446389  802078 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:53:47.805253  802078 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:53:48.039364  802078 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:53:48.570846  802078 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:53:48.856028  802078 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:53:48.856598  802078 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:53:48.860303  802078 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1206 09:53:46.032859  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:48.532015  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:48.862377  802078 out.go:252]   - Booting up control plane ...
	I1206 09:53:48.862492  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:53:48.862569  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:53:48.862631  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:53:48.876239  802078 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:53:48.876432  802078 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:53:48.883203  802078 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:53:48.883360  802078 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:53:48.883405  802078 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:53:48.990510  802078 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:53:48.990684  802078 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:53:50.991982  802078 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001590138s
	I1206 09:53:50.996336  802078 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:53:50.996527  802078 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:53:50.996665  802078 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:53:50.996797  802078 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:53:52.001314  802078 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004895696s
	I1206 09:53:52.920797  802078 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.924413306s
	I1206 09:53:54.497811  802078 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501431632s
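
The control-plane probes above hit ordinary health endpoints and can be replayed by hand from the node; -k is needed for the HTTPS components because they serve self-signed certificates:

    curl -sf  http://127.0.0.1:10248/healthz  && echo kubelet ok
    curl -skf https://127.0.0.1:10257/healthz && echo controller-manager ok
    curl -skf https://127.0.0.1:10259/livez   && echo scheduler ok
    curl -skf https://192.168.94.2:8443/livez && echo apiserver ok
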
	I1206 09:53:54.513719  802078 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:53:54.523571  802078 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:53:54.531849  802078 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:53:54.532153  802078 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-983381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:53:54.540058  802078 kubeadm.go:319] [bootstrap-token] Using token: prjydb.psh7t9q7oigrozcv
	W1206 09:53:51.032320  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:53.032413  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:54.541278  802078 out.go:252]   - Configuring RBAC rules ...
	I1206 09:53:54.541415  802078 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:53:54.544151  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:53:54.548808  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:53:54.551043  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:53:54.553366  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:53:54.556175  802078 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:53:54.904103  802078 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:53:55.318278  802078 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:53:55.904952  802078 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:53:55.906231  802078 kubeadm.go:319] 
	I1206 09:53:55.906359  802078 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:53:55.906379  802078 kubeadm.go:319] 
	I1206 09:53:55.906487  802078 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:53:55.906517  802078 kubeadm.go:319] 
	I1206 09:53:55.906565  802078 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:53:55.906639  802078 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:53:55.906715  802078 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:53:55.906721  802078 kubeadm.go:319] 
	I1206 09:53:55.906789  802078 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:53:55.906795  802078 kubeadm.go:319] 
	I1206 09:53:55.906852  802078 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:53:55.906863  802078 kubeadm.go:319] 
	I1206 09:53:55.906921  802078 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:53:55.907032  802078 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:53:55.907129  802078 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:53:55.907138  802078 kubeadm.go:319] 
	I1206 09:53:55.907270  802078 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:53:55.907380  802078 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:53:55.907390  802078 kubeadm.go:319] 
	I1206 09:53:55.907524  802078 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token prjydb.psh7t9q7oigrozcv \
	I1206 09:53:55.907678  802078 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:53:55.907711  802078 kubeadm.go:319] 	--control-plane 
	I1206 09:53:55.907722  802078 kubeadm.go:319] 
	I1206 09:53:55.907839  802078 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:53:55.907848  802078 kubeadm.go:319] 
	I1206 09:53:55.907970  802078 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token prjydb.psh7t9q7oigrozcv \
	I1206 09:53:55.908121  802078 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
	I1206 09:53:55.911115  802078 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:53:55.911281  802078 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
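
The join commands above carry a --discovery-token-ca-cert-hash; if it is ever lost it can be recomputed from the cluster CA with the standard recipe from the kubeadm documentation (the CA path here matches the certificatesDir in the config above; stock kubeadm clusters use /etc/kubernetes/pki/ca.crt instead):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
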
	I1206 09:53:55.911328  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:55.912757  802078 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 06 09:53:16 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:16.820060643Z" level=info msg="Created container 8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tkv7v/kubernetes-dashboard" id=9364b38c-a97a-447c-9714-d5e50f265de4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:16 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:16.821626806Z" level=info msg="Starting container: 8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b" id=36ad077b-db8c-4606-85a4-b4ee2c91f9b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:16 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:16.824850396Z" level=info msg="Started container" PID=1733 containerID=8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tkv7v/kubernetes-dashboard id=36ad077b-db8c-4606-85a4-b4ee2c91f9b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0aa2450220294f70e701066d5c6f793dd791fe5100d9dee55a3e999f11bccb7b
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.085520815Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=97d04ad4-6847-4099-8292-734b33cf9cfe name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.086521829Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f841246b-99b4-4d70-9525-7ad37121718c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.087692852Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5/dashboard-metrics-scraper" id=85d51716-eb2c-4143-9e10-11ed45c9e59b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.087845027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.097736767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.098377998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.167325663Z" level=info msg="Created container 37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5/dashboard-metrics-scraper" id=85d51716-eb2c-4143-9e10-11ed45c9e59b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.16804189Z" level=info msg="Starting container: 37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501" id=0c8ac657-af53-44b5-9283-281076b8f5c7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.170594792Z" level=info msg="Started container" PID=1759 containerID=37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5/dashboard-metrics-scraper id=0c8ac657-af53-44b5-9283-281076b8f5c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6d1f8e343105b0cac908bfe5e48a02dfa576b76b421c628678fcefdd68db100
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.229404396Z" level=info msg="Removing container: e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99" id=15c32d1d-a9dd-4426-8f4d-0a7ea3427826 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.230646458Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=33abc955-b265-4797-bc1a-02836ffa4d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.231772519Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=91713963-0375-4b3d-886d-d1de60c842e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.233670432Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8b0a393e-e5c2-47a8-ba48-055c674b1fb8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.233806105Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.2393148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.23953963Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/76ebadc07af5ab4bb546b2f3a68f892570b1aaac1529ba662f7eb538ecf5c326/merged/etc/passwd: no such file or directory"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.23960133Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/76ebadc07af5ab4bb546b2f3a68f892570b1aaac1529ba662f7eb538ecf5c326/merged/etc/group: no such file or directory"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.239958156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.242791113Z" level=info msg="Removed container e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5/dashboard-metrics-scraper" id=15c32d1d-a9dd-4426-8f4d-0a7ea3427826 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.276747642Z" level=info msg="Created container fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a: kube-system/storage-provisioner/storage-provisioner" id=8b0a393e-e5c2-47a8-ba48-055c674b1fb8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.277828102Z" level=info msg="Starting container: fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a" id=d5fb966f-78a7-4da6-8c66-7b249a7bc624 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.279740838Z" level=info msg="Started container" PID=1769 containerID=fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a description=kube-system/storage-provisioner/storage-provisioner id=d5fb966f-78a7-4da6-8c66-7b249a7bc624 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6aeb451bc613516f240bdf6e24a876fca2e63b1dfae8208ecf7b8f6598f6daa5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	fb21d006ca9cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   6aeb451bc6135       storage-provisioner                                    kube-system
	37de16197509e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   e6d1f8e343105       dashboard-metrics-scraper-6ffb444bf9-ktwz5             kubernetes-dashboard
	8e7c35c6460d6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   0aa2450220294       kubernetes-dashboard-855c9754f9-tkv7v                  kubernetes-dashboard
	0cfc5bf6ac0e2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   f01b819ca6a21       coredns-66bc5c9577-gpnjq                               kube-system
	0fd27514e3b63       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           54 seconds ago      Running             kube-proxy                  0                   543b62bad9b5b       kube-proxy-jstq5                                       kube-system
	600b10213ef00       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   ddc1a31d647eb       busybox                                                default
	1289e6d7da285       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   6aeb451bc6135       storage-provisioner                                    kube-system
	47b3688c94fe5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   b00778066b638       kindnet-cv6n8                                          kube-system
	49d5db0bf8c81       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   e42efaef3da95       kube-controller-manager-default-k8s-diff-port-759696   kube-system
	2b4e13927c1dd       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   a1b1ad14dfee3       kube-scheduler-default-k8s-diff-port-759696            kube-system
	96bf17c21fc5e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   f8ad85e2a00c7       etcd-default-k8s-diff-port-759696                      kube-system
	5081ea10eaf55       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   19d38df27e140       kube-apiserver-default-k8s-diff-port-759696            kube-system
	
	
	==> coredns [0cfc5bf6ac0e2acc0bfc6a44706294eb6472b9a0b6da79d346ae3dfa437729de] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42467 - 37979 "HINFO IN 5694450656926657780.8451394918645071831. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025745246s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-759696
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-759696
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=default-k8s-diff-port-759696
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-759696
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:53:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:53:34 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:53:34 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:53:34 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:53:34 +0000   Sat, 06 Dec 2025 09:52:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-759696
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                66717458-de25-4b46-9089-82e699ed1547
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-gpnjq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-759696                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-cv6n8                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-759696             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-759696    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-jstq5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-759696             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ktwz5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tkv7v                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node default-k8s-diff-port-759696 event: Registered Node default-k8s-diff-port-759696 in Controller
	  Normal  NodeReady                96s                  kubelet          Node default-k8s-diff-port-759696 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node default-k8s-diff-port-759696 event: Registered Node default-k8s-diff-port-759696 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [96bf17c21fc5ef4c1b3dca26666987c3ead355280a820de4ef784becde9de15b] <==
	{"level":"warn","ts":"2025-12-06T09:53:02.909273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.917974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.924348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.930640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.937430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.945039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.952080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.959823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.967747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.977656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.985980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.993785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.001676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.014961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.021999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.030066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.037224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.044344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.051351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.057990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.065130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.087773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.094980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.101792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.155538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56328","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:53:58 up  2:36,  0 user,  load average: 4.10, 3.34, 3.37
	Linux default-k8s-diff-port-759696 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [47b3688c94fe5f8791be5571032439e7f24a58e707a037c30b8f448c060aafe2] <==
	I1206 09:53:04.537444       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:53:04.537676       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1206 09:53:04.537792       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:53:04.537808       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:53:04.537828       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:53:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:53:04.835436       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:53:04.835480       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:53:04.835493       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:53:04.835618       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:53:05.135861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:53:05.135884       1 metrics.go:72] Registering metrics
	I1206 09:53:05.135937       1 controller.go:711] "Syncing nftables rules"
	I1206 09:53:14.835888       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:14.835978       1 main.go:301] handling current node
	I1206 09:53:24.841488       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:24.841544       1 main.go:301] handling current node
	I1206 09:53:34.835776       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:34.835810       1 main.go:301] handling current node
	I1206 09:53:44.836582       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:44.836632       1 main.go:301] handling current node
	I1206 09:53:54.836861       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:54.836904       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5081ea10eaf550a1552364d04b9716dd633af5964fac9bc876f2cc1e5ca71b16] <==
	I1206 09:53:03.640520       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:53:03.640531       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:53:03.640734       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:53:03.640534       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:53:03.640853       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:53:03.640861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:53:03.640942       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:53:03.649026       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:53:03.665425       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:53:03.674698       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 09:53:03.674722       1 policy_source.go:240] refreshing policies
	I1206 09:53:03.680509       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:53:03.691726       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:53:03.908709       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:53:03.935267       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:53:03.965027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:53:03.973974       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:53:03.982214       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:53:04.014932       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.49.220"}
	I1206 09:53:04.025715       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.217.234"}
	I1206 09:53:04.544196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:53:06.981055       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:53:06.981105       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:53:07.331948       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:53:07.530305       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [49d5db0bf8c817844e681d0c272f78bea45bd7a69be93dbd6b87ce00764c41c3] <==
	I1206 09:53:06.943801       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:53:06.946027       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:53:06.947516       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:53:06.959788       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:53:06.962068       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:53:06.964443       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:53:06.976903       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:53:06.976914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:53:06.977177       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 09:53:06.977280       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:53:06.977284       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:53:06.977335       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:53:06.977606       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:53:06.987612       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:53:06.993029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:53:06.997790       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:53:06.999033       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:53:07.003184       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:53:07.005812       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:53:07.009143       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:53:07.011413       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:53:07.020760       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:53:07.026539       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:53:07.026557       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:53:07.026565       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0fd27514e3b63af2ee66e57c5e0d3db6ac0f18efcf343dbefc6b2f2e256584f0] <==
	I1206 09:53:04.457309       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:53:04.524317       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:53:04.624645       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:53:04.624831       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1206 09:53:04.624938       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:53:04.647300       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:53:04.647361       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:53:04.653881       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:53:04.654439       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:53:04.654483       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:04.656181       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:53:04.656261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:53:04.656294       1 config.go:200] "Starting service config controller"
	I1206 09:53:04.656299       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:53:04.656314       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:53:04.656319       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:53:04.656355       1 config.go:309] "Starting node config controller"
	I1206 09:53:04.656371       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:53:04.756434       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:53:04.756502       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:53:04.756502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:53:04.757220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2b4e13927c1dd98b75c5d83e4aec397dc2e4749caaf7821cfac821811b1d3da7] <==
	I1206 09:53:02.016503       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:53:03.580595       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:53:03.580651       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:53:03.580665       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:53:03.580674       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:53:03.601367       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:53:03.601417       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:03.607263       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:53:03.607582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:53:03.607647       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:53:03.607692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:53:03.707907       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:53:07 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:07.717733     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446x8\" (UniqueName: \"kubernetes.io/projected/6b384527-3c93-4f55-839a-bae4f1b854db-kube-api-access-446x8\") pod \"kubernetes-dashboard-855c9754f9-tkv7v\" (UID: \"6b384527-3c93-4f55-839a-bae4f1b854db\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tkv7v"
	Dec 06 09:53:07 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:07.717762     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/87d4686a-1882-4a7b-adce-610fbd373a5d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ktwz5\" (UID: \"87d4686a-1882-4a7b-adce-610fbd373a5d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5"
	Dec 06 09:53:11 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:11.155169     726 scope.go:117] "RemoveContainer" containerID="016d01c477f8bc018e19ac9ac703912dbe90708ac8cbaff48e4d80fdd2177ae2"
	Dec 06 09:53:12 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:12.160238     726 scope.go:117] "RemoveContainer" containerID="016d01c477f8bc018e19ac9ac703912dbe90708ac8cbaff48e4d80fdd2177ae2"
	Dec 06 09:53:12 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:12.160423     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:12 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:12.161309     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:12 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:12.479298     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 06 09:53:13 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:13.165976     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:13 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:13.166159     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:17 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:17.195319     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tkv7v" podStartSLOduration=1.347345699 podStartE2EDuration="10.195296814s" podCreationTimestamp="2025-12-06 09:53:07 +0000 UTC" firstStartedPulling="2025-12-06 09:53:07.930043656 +0000 UTC m=+7.009201620" lastFinishedPulling="2025-12-06 09:53:16.777994783 +0000 UTC m=+15.857152735" observedRunningTime="2025-12-06 09:53:17.194918051 +0000 UTC m=+16.274076018" watchObservedRunningTime="2025-12-06 09:53:17.195296814 +0000 UTC m=+16.274454784"
	Dec 06 09:53:19 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:19.906667     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:19 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:19.906893     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:35.084936     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:35.228160     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:35.228406     726 scope.go:117] "RemoveContainer" containerID="37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:35.228750     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:35.230129     726 scope.go:117] "RemoveContainer" containerID="1289e6d7da285692d4fa714fc6797eeba4ead826886d680935fba4c4461f6875"
	Dec 06 09:53:39 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:39.906755     726 scope.go:117] "RemoveContainer" containerID="37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	Dec 06 09:53:39 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:39.906977     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:55 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:55.084898     726 scope.go:117] "RemoveContainer" containerID="37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	Dec 06 09:53:55 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:55.085131     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:56 default-k8s-diff-port-759696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:53:56 default-k8s-diff-port-759696 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:53:56 default-k8s-diff-port-759696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:53:56 default-k8s-diff-port-759696 systemd[1]: kubelet.service: Consumed 1.824s CPU time.
	
	
	==> kubernetes-dashboard [8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b] <==
	2025/12/06 09:53:16 Starting overwatch
	2025/12/06 09:53:16 Using namespace: kubernetes-dashboard
	2025/12/06 09:53:16 Using in-cluster config to connect to apiserver
	2025/12/06 09:53:16 Using secret token for csrf signing
	2025/12/06 09:53:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:53:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:53:16 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 09:53:16 Generating JWE encryption key
	2025/12/06 09:53:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:53:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:53:16 Initializing JWE encryption key from synchronized object
	2025/12/06 09:53:16 Creating in-cluster Sidecar client
	2025/12/06 09:53:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:53:16 Serving insecurely on HTTP port: 9090
	2025/12/06 09:53:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1289e6d7da285692d4fa714fc6797eeba4ead826886d680935fba4c4461f6875] <==
	I1206 09:53:04.421756       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:53:34.424547       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a] <==
	I1206 09:53:35.291957       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:53:35.300028       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:53:35.300083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:53:35.302793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:38.759407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:43.024447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:46.623226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:49.677115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:52.699843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:52.705223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:52.705367       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:53:52.705582       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759696_0ffa14f0-2b59-4526-93c8-04417567bbe6!
	I1206 09:53:52.705610       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"319c0bdb-ab5a-4a15-8303-dcd154877547", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-759696_0ffa14f0-2b59-4526-93c8-04417567bbe6 became leader
	W1206 09:53:52.707319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:52.711269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:52.805742       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759696_0ffa14f0-2b59-4526-93c8-04417567bbe6!
	W1206 09:53:54.713908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:54.719110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:56.723722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:56.729300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:58.732730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:58.736616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696: exit status 2 (354.849329ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-759696 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-759696
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-759696:

-- stdout --
	[
	    {
	        "Id": "7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87",
	        "Created": "2025-12-06T09:51:52.674641004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 789836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:54.636676495Z",
	            "FinishedAt": "2025-12-06T09:52:53.780352741Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/hosts",
	        "LogPath": "/var/lib/docker/containers/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87/7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87-json.log",
	        "Name": "/default-k8s-diff-port-759696",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-759696:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-759696",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e15a599707914a015b1614444a4cc7c30cb1f593e0d0ce6f8e12d2570b38f87",
	                "LowerDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38ec703e39eee5cc8301a96f7b6e8cc72997d28b9b066af8be326fffd278b590/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-759696",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-759696/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-759696",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-759696",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-759696",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "44caf01710ea177df544b1306f7cc3fd12de4a88f3a26cf23bef177a9ef6402a",
	            "SandboxKey": "/var/run/docker/netns/44caf01710ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33224"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-759696": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8eafe0b310a8d3a7cc2c2f8b223b86754d5d6f80cb6837e1258939016171b84",
	                    "EndpointID": "9e1bfdf35fd6bc6251ac8252d9247e2e6afb82f8f94e959e82322f65797aed10",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6a:07:11:62:8a:57",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-759696",
	                        "7e15a5997079"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696: exit status 2 (376.577726ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-759696 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-759696 logs -n 25: (1.372704599s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-997968 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ image   │ newest-cni-641599 image list --format=json                                                                                                                                                                                                           │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p newest-cni-641599 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p auto-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-983381                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ no-preload-521770 image list --format=json                                                                                                                                                                                                           │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p no-preload-521770 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ delete  │ -p no-preload-521770                                                                                                                                                                                                                                 │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p no-preload-521770                                                                                                                                                                                                                                 │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p kindnet-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-983381               │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ default-k8s-diff-port-759696 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p default-k8s-diff-port-759696 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ embed-certs-997968 image list --format=json                                                                                                                                                                                                          │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-997968 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:53:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:53:35.736114  802078 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:35.736358  802078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:35.736366  802078 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:35.736370  802078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:35.736608  802078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:35.737088  802078 out.go:368] Setting JSON to false
	I1206 09:53:35.738323  802078 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9360,"bootTime":1765005456,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:53:35.738388  802078 start.go:143] virtualization: kvm guest
	I1206 09:53:35.740317  802078 out.go:179] * [kindnet-983381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:53:35.741422  802078 notify.go:221] Checking for updates...
	I1206 09:53:35.741506  802078 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:53:35.742495  802078 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:53:35.743616  802078 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:35.744630  802078 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:53:35.745749  802078 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:53:35.746924  802078 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:53:35.748304  802078 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748393  802078 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748491  802078 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748589  802078 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:53:35.772982  802078 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:53:35.773088  802078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:35.830680  802078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:35.820532325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:35.830809  802078 docker.go:319] overlay module found
	I1206 09:53:35.832481  802078 out.go:179] * Using the docker driver based on user configuration
	I1206 09:53:35.833543  802078 start.go:309] selected driver: docker
	I1206 09:53:35.833558  802078 start.go:927] validating driver "docker" against <nil>
	I1206 09:53:35.833571  802078 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:53:35.834109  802078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:35.894209  802078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:35.883098075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:35.894359  802078 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:53:35.894710  802078 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:35.896340  802078 out.go:179] * Using Docker driver with root privileges
	I1206 09:53:35.897286  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:35.897302  802078 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:53:35.897380  802078 start.go:353] cluster config:
	{Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:35.898495  802078 out.go:179] * Starting "kindnet-983381" primary control-plane node in "kindnet-983381" cluster
	I1206 09:53:35.899494  802078 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:53:35.900543  802078 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:53:35.901765  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:35.901802  802078 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:53:35.901815  802078 cache.go:65] Caching tarball of preloaded images
	I1206 09:53:35.901854  802078 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:53:35.901908  802078 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:53:35.901922  802078 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:53:35.902031  802078 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json ...
	I1206 09:53:35.902059  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json: {Name:mk3a79de74bde68ec31b151eacb622c73b38daf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:35.924146  802078 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:53:35.924170  802078 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:53:35.924185  802078 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:53:35.924214  802078 start.go:360] acquireMachinesLock for kindnet-983381: {Name:mk6e4785105686f4f72d41f8081d2646bcdec596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:53:35.924309  802078 start.go:364] duration metric: took 76.057µs to acquireMachinesLock for "kindnet-983381"
	I1206 09:53:35.924331  802078 start.go:93] Provisioning new machine with config: &{Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:35.924423  802078 start.go:125] createHost starting for "" (driver="docker")
	W1206 09:53:33.030174  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:35.530789  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:35.464753  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:35.964668  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:36.464301  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:36.964782  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:37.464682  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:37.965001  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:38.464290  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:38.545704  796626 kubeadm.go:1114] duration metric: took 4.655385618s to wait for elevateKubeSystemPrivileges
	I1206 09:53:38.545746  796626 kubeadm.go:403] duration metric: took 16.212898927s to StartCluster
	I1206 09:53:38.545772  796626 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:38.545859  796626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:38.548341  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:38.548970  796626 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:38.548998  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:53:38.549213  796626 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:38.549132  796626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:53:38.549388  796626 addons.go:70] Setting storage-provisioner=true in profile "auto-983381"
	I1206 09:53:38.549417  796626 addons.go:239] Setting addon storage-provisioner=true in "auto-983381"
	I1206 09:53:38.549483  796626 host.go:66] Checking if "auto-983381" exists ...
	I1206 09:53:38.550037  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.549392  796626 addons.go:70] Setting default-storageclass=true in profile "auto-983381"
	I1206 09:53:38.550507  796626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-983381"
	I1206 09:53:38.550833  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.550926  796626 out.go:179] * Verifying Kubernetes components...
	I1206 09:53:38.553410  796626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:38.588099  796626 addons.go:239] Setting addon default-storageclass=true in "auto-983381"
	I1206 09:53:38.588150  796626 host.go:66] Checking if "auto-983381" exists ...
	I1206 09:53:38.588574  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.617638  796626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:38.617665  796626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:53:38.617866  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:38.626643  796626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1206 09:53:35.198662  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:37.200822  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:38.638810  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:38.668107  796626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:38.668132  796626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:53:38.668195  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:38.675658  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:53:38.692832  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:38.702368  796626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:38.748795  796626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:38.801277  796626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:39.027533  796626 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1206 09:53:39.029182  796626 node_ready.go:35] waiting up to 15m0s for node "auto-983381" to be "Ready" ...
	I1206 09:53:39.654289  796626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-983381" context rescaled to 1 replicas
	I1206 09:53:40.345178  796626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.543850325s)
	I1206 09:53:40.346941  796626 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
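	
	The replace pipeline at 09:53:38.675658 splices two edits into the CoreDNS Corefile before pushing it back with kubectl replace: a log directive ahead of the errors line, and, ahead of the forward . /etc/resolv.conf line, a hosts stanza mapping the network gateway to host.minikube.internal. Assuming an otherwise stock Corefile, the injected stanza for this run is:
	
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	
	which is what the "host record injected into CoreDNS's ConfigMap" line at 09:53:39.027533 confirms.
	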
	I1206 09:53:35.925937  802078 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:53:35.926209  802078 start.go:159] libmachine.API.Create for "kindnet-983381" (driver="docker")
	I1206 09:53:35.926252  802078 client.go:173] LocalClient.Create starting
	I1206 09:53:35.926340  802078 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:53:35.926378  802078 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:35.926405  802078 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:35.926506  802078 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:53:35.926535  802078 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:35.926552  802078 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:35.926986  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:53:35.943608  802078 cli_runner.go:211] docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:53:35.943723  802078 network_create.go:284] running [docker network inspect kindnet-983381] to gather additional debugging logs...
	I1206 09:53:35.943748  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381
	W1206 09:53:35.960448  802078 cli_runner.go:211] docker network inspect kindnet-983381 returned with exit code 1
	I1206 09:53:35.960495  802078 network_create.go:287] error running [docker network inspect kindnet-983381]: docker network inspect kindnet-983381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-983381 not found
	I1206 09:53:35.960514  802078 network_create.go:289] output of [docker network inspect kindnet-983381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-983381 not found
	
	** /stderr **
	I1206 09:53:35.960636  802078 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:35.980401  802078 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:53:35.981149  802078 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:53:35.981925  802078 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:53:35.982560  802078 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fadb45f2248d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:97:af:e5:cc:0b} reservation:<nil>}
	I1206 09:53:35.983088  802078 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5d9447c39c3c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e2:61:5e:c6:7b:21} reservation:<nil>}
	I1206 09:53:35.983881  802078 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e6b5e0}
	I1206 09:53:35.983907  802078 network_create.go:124] attempt to create docker network kindnet-983381 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:53:35.983952  802078 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-983381 kindnet-983381
	I1206 09:53:36.037591  802078 network_create.go:108] docker network kindnet-983381 192.168.94.0/24 created
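	
	The subnet scan above takes the first private /24 that no existing bridge network claims; in this run the candidate third octet steps by 9 from 49 until 192.168.94.0/24 comes up free. A hard-coded sketch of that scan (not minikube's network.go; the taken set is copied from the log lines above):
	
	// subnetpick.go: sketch of the free-subnet scan visible in these logs.
	package main
	
	import "fmt"
	
	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		// Candidate third octet steps by 9, matching this run's sequence.
		for octet := 49; octet <= 247; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[subnet] {
				fmt.Println("skipping taken subnet", subnet)
				continue
			}
			fmt.Println("using free private subnet", subnet) // 192.168.94.0/24
			break
		}
	}
	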
	I1206 09:53:36.037621  802078 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-983381" container
	I1206 09:53:36.037678  802078 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:53:36.055484  802078 cli_runner.go:164] Run: docker volume create kindnet-983381 --label name.minikube.sigs.k8s.io=kindnet-983381 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:53:36.074528  802078 oci.go:103] Successfully created a docker volume kindnet-983381
	I1206 09:53:36.074605  802078 cli_runner.go:164] Run: docker run --rm --name kindnet-983381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-983381 --entrypoint /usr/bin/test -v kindnet-983381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:53:36.495904  802078 oci.go:107] Successfully prepared a docker volume kindnet-983381
	I1206 09:53:36.495988  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:36.496004  802078 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:53:36.496085  802078 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:53:40.493659  802078 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.997500219s)
	I1206 09:53:40.493705  802078 kic.go:203] duration metric: took 3.997696888s to extract preloaded images to volume ...
	W1206 09:53:40.493857  802078 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:53:40.493908  802078 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:53:40.493960  802078 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:53:40.553379  802078 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-983381 --name kindnet-983381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-983381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-983381 --network kindnet-983381 --ip 192.168.94.2 --volume kindnet-983381:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	W1206 09:53:37.530880  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:39.530936  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:40.844704  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Running}}
	I1206 09:53:40.865257  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:40.884729  802078 cli_runner.go:164] Run: docker exec kindnet-983381 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:53:40.933934  802078 oci.go:144] the created container "kindnet-983381" has a running status.
	I1206 09:53:40.933992  802078 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa...
	I1206 09:53:41.065963  802078 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:53:41.097932  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:41.118699  802078 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:53:41.118719  802078 kic_runner.go:114] Args: [docker exec --privileged kindnet-983381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:53:41.177800  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:41.202566  802078 machine.go:94] provisionDockerMachine start ...
	I1206 09:53:41.202682  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.226294  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.226976  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.227014  802078 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:53:41.366826  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-983381
	
	I1206 09:53:41.366854  802078 ubuntu.go:182] provisioning hostname "kindnet-983381"
	I1206 09:53:41.366930  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.388560  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.388853  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.388868  802078 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-983381 && echo "kindnet-983381" | sudo tee /etc/hostname
	I1206 09:53:41.533194  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-983381
	
	I1206 09:53:41.533282  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.553319  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.553612  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.553649  802078 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-983381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-983381/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-983381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:53:41.687391  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:53:41.687422  802078 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:53:41.687496  802078 ubuntu.go:190] setting up certificates
	I1206 09:53:41.687511  802078 provision.go:84] configureAuth start
	I1206 09:53:41.687570  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:41.706986  802078 provision.go:143] copyHostCerts
	I1206 09:53:41.707057  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:53:41.707070  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:53:41.707141  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:53:41.707232  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:53:41.707242  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:53:41.707269  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:53:41.707336  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:53:41.707343  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:53:41.707366  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:53:41.707413  802078 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.kindnet-983381 san=[127.0.0.1 192.168.94.2 kindnet-983381 localhost minikube]
	I1206 09:53:41.806395  802078 provision.go:177] copyRemoteCerts
	I1206 09:53:41.806477  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:53:41.806526  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.825939  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:41.922925  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1206 09:53:41.943043  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:53:41.962026  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:53:41.980803  802078 provision.go:87] duration metric: took 293.274301ms to configureAuth
	I1206 09:53:41.980839  802078 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:53:41.981030  802078 config.go:182] Loaded profile config "kindnet-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:41.981180  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.001023  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:42.001294  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:42.001312  802078 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:53:42.284104  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:53:42.284126  802078 machine.go:97] duration metric: took 1.081535088s to provisionDockerMachine
	I1206 09:53:42.284136  802078 client.go:176] duration metric: took 6.35787804s to LocalClient.Create
	I1206 09:53:42.284158  802078 start.go:167] duration metric: took 6.357949811s to libmachine.API.Create "kindnet-983381"
	I1206 09:53:42.284171  802078 start.go:293] postStartSetup for "kindnet-983381" (driver="docker")
	I1206 09:53:42.284188  802078 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:53:42.284255  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:53:42.284310  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.302172  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.400807  802078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:53:42.404744  802078 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:53:42.404778  802078 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:53:42.404792  802078 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:53:42.404846  802078 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:53:42.404962  802078 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:53:42.405098  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:53:42.414846  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:42.437157  802078 start.go:296] duration metric: took 152.966336ms for postStartSetup
	I1206 09:53:42.437535  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:42.455950  802078 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json ...
	I1206 09:53:42.456172  802078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:53:42.456212  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.474118  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.568913  802078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:53:42.573671  802078 start.go:128] duration metric: took 6.649231212s to createHost
	I1206 09:53:42.573696  802078 start.go:83] releasing machines lock for "kindnet-983381", held for 6.649375377s
	I1206 09:53:42.573776  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:42.593508  802078 ssh_runner.go:195] Run: cat /version.json
	I1206 09:53:42.593569  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.593516  802078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:53:42.593700  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.611419  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.612544  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.760558  802078 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:42.767177  802078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:53:42.803261  802078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:53:42.807859  802078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:53:42.807927  802078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:53:42.833482  802078 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
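Moving the pre-existing bridge/podman configs aside matters because CNI consumers such as CRI-O pick up the lexicographically first file in /etc/cni/net.d, so a leftover bridge config could shadow the kindnet config installed later in this run. A quick way to see the effect on the node (illustrative, not a command from this log; run inside `out/minikube-linux-amd64 -p kindnet-983381 ssh`):

	sudo ls /etc/cni/net.d
	# the bridge/podman configs listed above now carry a .mk_disabled
	# suffix, leaving the directory free for the kindnet CNI config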
	I1206 09:53:42.833508  802078 start.go:496] detecting cgroup driver to use...
	I1206 09:53:42.833546  802078 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:53:42.833599  802078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:53:42.849782  802078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:53:42.862043  802078 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:53:42.862089  802078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:53:42.879135  802078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:53:42.898925  802078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:53:42.987951  802078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:53:43.078627  802078 docker.go:234] disabling docker service ...
	I1206 09:53:43.078699  802078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:53:43.100368  802078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:53:43.113370  802078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:53:43.201176  802078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:53:43.294081  802078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:53:43.307631  802078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:53:43.321801  802078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:53:43.321856  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.331480  802078 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:53:43.331547  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.340367  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.349027  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.357421  802078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:53:43.365342  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.375704  802078 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.390512  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.399512  802078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:53:43.406601  802078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:53:43.413755  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:43.497188  802078 ssh_runner.go:195] Run: sudo systemctl restart crio
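Taken together, the sed edits above leave the CRI-O drop-in with a pinned pause image, the systemd cgroup manager, conmon in the pod cgroup, and unprivileged low ports enabled. One way to confirm on the node (a sketch, assuming the stock drop-in matched every sed pattern):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",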
	I1206 09:53:43.639801  802078 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:53:43.639882  802078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:53:43.644037  802078 start.go:564] Will wait 60s for crictl version
	I1206 09:53:43.644085  802078 ssh_runner.go:195] Run: which crictl
	I1206 09:53:43.647878  802078 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:53:43.673703  802078 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:53:43.673775  802078 ssh_runner.go:195] Run: crio --version
	I1206 09:53:43.703877  802078 ssh_runner.go:195] Run: crio --version
	I1206 09:53:43.733325  802078 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1206 09:53:39.697901  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:41.698088  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:42.698368  789560 pod_ready.go:94] pod "coredns-66bc5c9577-gpnjq" is "Ready"
	I1206 09:53:42.698400  789560 pod_ready.go:86] duration metric: took 37.505994586s for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.700901  789560 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.705139  789560 pod_ready.go:94] pod "etcd-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.705165  789560 pod_ready.go:86] duration metric: took 4.236162ms for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.707252  789560 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.711008  789560 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.711028  789560 pod_ready.go:86] duration metric: took 3.752374ms for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.713026  789560 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.897101  789560 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.897138  789560 pod_ready.go:86] duration metric: took 184.092641ms for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.097595  789560 pod_ready.go:83] waiting for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.497251  789560 pod_ready.go:94] pod "kube-proxy-jstq5" is "Ready"
	I1206 09:53:43.497282  789560 pod_ready.go:86] duration metric: took 399.656581ms for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.697290  789560 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.096580  789560 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:44.096611  789560 pod_ready.go:86] duration metric: took 399.289382ms for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.096627  789560 pod_ready.go:40] duration metric: took 38.907883012s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:44.141173  789560 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:53:44.143056  789560 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-759696" cluster and "default" namespace by default
	W1206 09:53:42.029753  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:43.530143  792441 pod_ready.go:94] pod "coredns-66bc5c9577-kw8nl" is "Ready"
	I1206 09:53:43.530177  792441 pod_ready.go:86] duration metric: took 31.006235572s for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.532504  792441 pod_ready.go:83] waiting for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.539381  792441 pod_ready.go:94] pod "etcd-embed-certs-997968" is "Ready"
	I1206 09:53:43.539408  792441 pod_ready.go:86] duration metric: took 6.868509ms for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.541690  792441 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.545546  792441 pod_ready.go:94] pod "kube-apiserver-embed-certs-997968" is "Ready"
	I1206 09:53:43.545571  792441 pod_ready.go:86] duration metric: took 3.85484ms for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.547358  792441 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.728143  792441 pod_ready.go:94] pod "kube-controller-manager-embed-certs-997968" is "Ready"
	I1206 09:53:43.728172  792441 pod_ready.go:86] duration metric: took 180.793456ms for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.928272  792441 pod_ready.go:83] waiting for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.328082  792441 pod_ready.go:94] pod "kube-proxy-m2zpr" is "Ready"
	I1206 09:53:44.328117  792441 pod_ready.go:86] duration metric: took 399.817969ms for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.528776  792441 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.927733  792441 pod_ready.go:94] pod "kube-scheduler-embed-certs-997968" is "Ready"
	I1206 09:53:44.927763  792441 pod_ready.go:86] duration metric: took 398.958608ms for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.927778  792441 pod_ready.go:40] duration metric: took 32.40863001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:44.980591  792441 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:53:44.982680  792441 out.go:179] * Done! kubectl is now configured to use "embed-certs-997968" cluster and "default" namespace by default
	I1206 09:53:43.734370  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:43.751497  802078 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:53:43.755659  802078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
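The hosts update goes through a temp file plus `sudo cp` rather than `sed -i`, most likely because /etc/hosts inside a kic container is a Docker bind mount: rename-based in-place edits fail there, while cp rewrites the existing inode. Verifying the result (illustrative):

	grep host.minikube.internal /etc/hosts
	# 192.168.94.1	host.minikube.internal
	# the same rewrite-and-copy idiom recurs below for
	# control-plane.minikube.internal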
	I1206 09:53:43.765989  802078 kubeadm.go:884] updating cluster {Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:53:43.766104  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:43.766146  802078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:43.799525  802078 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:43.799546  802078 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:53:43.799590  802078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:43.825735  802078 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:43.825758  802078 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:53:43.825766  802078 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1206 09:53:43.825861  802078 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-983381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
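The unit text above is written a few lines below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The bare `ExecStart=` line is deliberate: for a simple service, a drop-in must first clear the base unit's command list before substituting its own kubelet invocation. To inspect the merged result on the node (illustrative):

	systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by the
	# 10-kubeadm.conf drop-in shown above, with the empty ExecStart=
	# resetting the base unit's command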
	I1206 09:53:43.825926  802078 ssh_runner.go:195] Run: crio config
	I1206 09:53:43.872261  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:43.872292  802078 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:53:43.872313  802078 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-983381 NodeName:kindnet-983381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:53:43.872443  802078 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-983381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
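Everything from the kubeadm.go:196 line down to the blank line above is the four-document config kubeadm consumes. Recent kubeadm releases (v1.26+) can lint such a file; once the generated file has been copied into place as /var/tmp/minikube/kubeadm.yaml further below, a check along these lines should pass (a sketch, not something the test harness runs):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml
	# exits non-zero with per-field errors if any of the
	# InitConfiguration, ClusterConfiguration, KubeletConfiguration or
	# KubeProxyConfiguration documents fails schema or semantic checks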
	I1206 09:53:43.872538  802078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:53:43.881153  802078 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:53:43.881224  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:53:43.889391  802078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1206 09:53:43.903305  802078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:53:43.918720  802078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1206 09:53:43.931394  802078 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:53:43.935089  802078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:43.944888  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:44.030422  802078 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:44.054751  802078 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381 for IP: 192.168.94.2
	I1206 09:53:44.054774  802078 certs.go:195] generating shared ca certs ...
	I1206 09:53:44.054796  802078 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.054979  802078 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:53:44.055055  802078 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:53:44.055074  802078 certs.go:257] generating profile certs ...
	I1206 09:53:44.055148  802078 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key
	I1206 09:53:44.055166  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt with IP's: []
	I1206 09:53:44.179136  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt ...
	I1206 09:53:44.179163  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt: {Name:mkbed0739e68db5951cd1670ef77a82b17aedb26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.179330  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key ...
	I1206 09:53:44.179342  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key: {Name:mk3e2c0a04a2e3e8f578932802d27c8b90d53860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.179422  802078 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3
	I1206 09:53:44.179436  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:53:44.342441  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 ...
	I1206 09:53:44.342476  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3: {Name:mk0af4503346333895c5c579d4fb2a8c9dcfdcee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.342649  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3 ...
	I1206 09:53:44.342667  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3: {Name:mk0a4a58e7f8845d02778448b3e5355101c2e3fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.342770  802078 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt
	I1206 09:53:44.342868  802078 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key
	I1206 09:53:44.342951  802078 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key
	I1206 09:53:44.342972  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt with IP's: []
	I1206 09:53:44.520746  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt ...
	I1206 09:53:44.520773  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt: {Name:mk958624794dd2556a8291c9921b454b157f3c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.520946  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key ...
	I1206 09:53:44.520964  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key: {Name:mk18cc910abc009e83545d8f4f4f90e12f1bb752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.521164  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:53:44.521215  802078 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:53:44.521249  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:53:44.521292  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:53:44.521333  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:53:44.521369  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:53:44.521446  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
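All of the profile certs generated above are ordinary PEM files under the jenkins workspace, so they can be eyeballed with openssl on the CI host (illustrative, not part of the test run):

	openssl x509 -noout -subject -issuer -dates -in \
	  /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt
	# subject/issuer confirm the cert chains to minikubeCA, and -dates
	# reflects the CertExpiration:26280h0m0s from the cluster config above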
	I1206 09:53:44.522095  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:53:44.543389  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:53:44.561934  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:53:44.580049  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:53:44.597598  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:53:44.616278  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:53:44.633517  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:53:44.650699  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:53:44.668505  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:53:44.687149  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:53:44.704280  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:53:44.721767  802078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:53:44.734395  802078 ssh_runner.go:195] Run: openssl version
	I1206 09:53:44.740626  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.747691  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:53:44.755054  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.758535  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.758584  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.794842  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:44.803151  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:44.811376  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.819662  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:53:44.827450  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.831929  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.831984  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.868641  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:53:44.877016  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:53:44.884481  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.892077  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:53:44.899367  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.903291  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.903338  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.949126  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:53:44.957677  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
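The odd-looking symlink names come straight from openssl: `openssl x509 -hash` prints a subject-name hash, and OpenSSL resolves CA lookups through <hash>.0 links in /etc/ssl/certs, which is why minikubeCA.pem ends up behind b5213941.0 above. Reproducing one mapping by hand on the node (illustrative):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941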
	I1206 09:53:44.966120  802078 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:53:44.970308  802078 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:53:44.970374  802078 kubeadm.go:401] StartCluster: {Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:44.970477  802078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:53:44.970560  802078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:53:45.001343  802078 cri.go:89] found id: ""
	I1206 09:53:45.001414  802078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:53:45.012421  802078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:53:45.021194  802078 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:53:45.021261  802078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:53:45.029115  802078 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:53:45.029134  802078 kubeadm.go:158] found existing configuration files:
	
	I1206 09:53:45.029169  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:53:45.037815  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:53:45.037872  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:53:45.045435  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:53:45.053955  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:53:45.054012  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:53:45.062303  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:53:45.070593  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:53:45.070643  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:53:45.079208  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:53:45.088134  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:53:45.088189  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:53:45.095979  802078 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:53:45.138079  802078 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:53:45.138199  802078 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:53:45.160175  802078 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:53:45.160254  802078 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:53:45.160285  802078 kubeadm.go:319] OS: Linux
	I1206 09:53:45.160352  802078 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:53:45.160443  802078 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:53:45.160554  802078 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:53:45.160647  802078 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:53:45.160734  802078 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:53:45.160812  802078 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:53:45.160892  802078 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:53:45.160962  802078 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:53:45.221322  802078 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:53:45.221523  802078 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:53:45.221688  802078 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:53:45.229074  802078 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:53:40.367608  796626 addons.go:530] duration metric: took 1.818454225s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1206 09:53:41.032946  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:43.533007  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:45.230909  802078 out.go:252]   - Generating certificates and keys ...
	I1206 09:53:45.231011  802078 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:53:45.231124  802078 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:53:45.410620  802078 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:53:45.930986  802078 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:53:46.263989  802078 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:53:46.476019  802078 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:53:46.655346  802078 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:53:46.655593  802078 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-983381 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:53:46.754725  802078 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:53:46.754894  802078 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-983381 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:53:46.832327  802078 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:53:46.992545  802078 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:53:47.179111  802078 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:53:47.179231  802078 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:53:47.446389  802078 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:53:47.805253  802078 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:53:48.039364  802078 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:53:48.570846  802078 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:53:48.856028  802078 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:53:48.856598  802078 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:53:48.860303  802078 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1206 09:53:46.032859  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:48.532015  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:48.862377  802078 out.go:252]   - Booting up control plane ...
	I1206 09:53:48.862492  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:53:48.862569  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:53:48.862631  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:53:48.876239  802078 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:53:48.876432  802078 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:53:48.883203  802078 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:53:48.883360  802078 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:53:48.883405  802078 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:53:48.990510  802078 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:53:48.990684  802078 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:53:50.991982  802078 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001590138s
	I1206 09:53:50.996336  802078 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:53:50.996527  802078 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:53:50.996665  802078 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:53:50.996797  802078 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:53:52.001314  802078 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004895696s
	I1206 09:53:52.920797  802078 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.924413306s
	I1206 09:53:54.497811  802078 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501431632s
	I1206 09:53:54.513719  802078 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:53:54.523571  802078 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:53:54.531849  802078 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:53:54.532153  802078 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-983381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:53:54.540058  802078 kubeadm.go:319] [bootstrap-token] Using token: prjydb.psh7t9q7oigrozcv
	W1206 09:53:51.032320  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:53.032413  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:54.541278  802078 out.go:252]   - Configuring RBAC rules ...
	I1206 09:53:54.541415  802078 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:53:54.544151  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:53:54.548808  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:53:54.551043  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:53:54.553366  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:53:54.556175  802078 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:53:54.904103  802078 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:53:55.318278  802078 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:53:55.904952  802078 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:53:55.906231  802078 kubeadm.go:319] 
	I1206 09:53:55.906359  802078 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:53:55.906379  802078 kubeadm.go:319] 
	I1206 09:53:55.906487  802078 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:53:55.906517  802078 kubeadm.go:319] 
	I1206 09:53:55.906565  802078 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:53:55.906639  802078 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:53:55.906715  802078 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:53:55.906721  802078 kubeadm.go:319] 
	I1206 09:53:55.906789  802078 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:53:55.906795  802078 kubeadm.go:319] 
	I1206 09:53:55.906852  802078 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:53:55.906863  802078 kubeadm.go:319] 
	I1206 09:53:55.906921  802078 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:53:55.907032  802078 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:53:55.907129  802078 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:53:55.907138  802078 kubeadm.go:319] 
	I1206 09:53:55.907270  802078 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:53:55.907380  802078 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:53:55.907390  802078 kubeadm.go:319] 
	I1206 09:53:55.907524  802078 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token prjydb.psh7t9q7oigrozcv \
	I1206 09:53:55.907678  802078 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:53:55.907711  802078 kubeadm.go:319] 	--control-plane 
	I1206 09:53:55.907722  802078 kubeadm.go:319] 
	I1206 09:53:55.907839  802078 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:53:55.907848  802078 kubeadm.go:319] 
	I1206 09:53:55.907970  802078 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token prjydb.psh7t9q7oigrozcv \
	I1206 09:53:55.908121  802078 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
	I1206 09:53:55.911115  802078 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:53:55.911281  802078 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
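The --discovery-token-ca-cert-hash embedded in both join commands above is a SHA-256 over the cluster CA's public key, and it can be recomputed on the node from the certificatesDir configured earlier (/var/lib/minikube/certs). A sketch using the standard openssl pipeline from the kubeadm docs, assuming the RSA CA minikube generates (run inside `out/minikube-linux-amd64 -p kindnet-983381 ssh`):

	sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44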
	I1206 09:53:55.911328  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:55.912757  802078 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 06 09:53:16 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:16.820060643Z" level=info msg="Created container 8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tkv7v/kubernetes-dashboard" id=9364b38c-a97a-447c-9714-d5e50f265de4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:16 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:16.821626806Z" level=info msg="Starting container: 8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b" id=36ad077b-db8c-4606-85a4-b4ee2c91f9b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:16 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:16.824850396Z" level=info msg="Started container" PID=1733 containerID=8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tkv7v/kubernetes-dashboard id=36ad077b-db8c-4606-85a4-b4ee2c91f9b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0aa2450220294f70e701066d5c6f793dd791fe5100d9dee55a3e999f11bccb7b
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.085520815Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=97d04ad4-6847-4099-8292-734b33cf9cfe name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.086521829Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f841246b-99b4-4d70-9525-7ad37121718c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.087692852Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5/dashboard-metrics-scraper" id=85d51716-eb2c-4143-9e10-11ed45c9e59b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.087845027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.097736767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.098377998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.167325663Z" level=info msg="Created container 37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5/dashboard-metrics-scraper" id=85d51716-eb2c-4143-9e10-11ed45c9e59b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.16804189Z" level=info msg="Starting container: 37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501" id=0c8ac657-af53-44b5-9283-281076b8f5c7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.170594792Z" level=info msg="Started container" PID=1759 containerID=37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5/dashboard-metrics-scraper id=0c8ac657-af53-44b5-9283-281076b8f5c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6d1f8e343105b0cac908bfe5e48a02dfa576b76b421c628678fcefdd68db100
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.229404396Z" level=info msg="Removing container: e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99" id=15c32d1d-a9dd-4426-8f4d-0a7ea3427826 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.230646458Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=33abc955-b265-4797-bc1a-02836ffa4d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.231772519Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=91713963-0375-4b3d-886d-d1de60c842e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.233670432Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8b0a393e-e5c2-47a8-ba48-055c674b1fb8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.233806105Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.2393148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.23953963Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/76ebadc07af5ab4bb546b2f3a68f892570b1aaac1529ba662f7eb538ecf5c326/merged/etc/passwd: no such file or directory"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.23960133Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/76ebadc07af5ab4bb546b2f3a68f892570b1aaac1529ba662f7eb538ecf5c326/merged/etc/group: no such file or directory"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.239958156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.242791113Z" level=info msg="Removed container e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5/dashboard-metrics-scraper" id=15c32d1d-a9dd-4426-8f4d-0a7ea3427826 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.276747642Z" level=info msg="Created container fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a: kube-system/storage-provisioner/storage-provisioner" id=8b0a393e-e5c2-47a8-ba48-055c674b1fb8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.277828102Z" level=info msg="Starting container: fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a" id=d5fb966f-78a7-4da6-8c66-7b249a7bc624 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:35 default-k8s-diff-port-759696 crio[565]: time="2025-12-06T09:53:35.279740838Z" level=info msg="Started container" PID=1769 containerID=fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a description=kube-system/storage-provisioner/storage-provisioner id=d5fb966f-78a7-4da6-8c66-7b249a7bc624 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6aeb451bc613516f240bdf6e24a876fca2e63b1dfae8208ecf7b8f6598f6daa5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	fb21d006ca9cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   6aeb451bc6135       storage-provisioner                                    kube-system
	37de16197509e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   e6d1f8e343105       dashboard-metrics-scraper-6ffb444bf9-ktwz5             kubernetes-dashboard
	8e7c35c6460d6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   0aa2450220294       kubernetes-dashboard-855c9754f9-tkv7v                  kubernetes-dashboard
	0cfc5bf6ac0e2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   f01b819ca6a21       coredns-66bc5c9577-gpnjq                               kube-system
	0fd27514e3b63       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           56 seconds ago      Running             kube-proxy                  0                   543b62bad9b5b       kube-proxy-jstq5                                       kube-system
	600b10213ef00       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   ddc1a31d647eb       busybox                                                default
	1289e6d7da285       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   6aeb451bc6135       storage-provisioner                                    kube-system
	47b3688c94fe5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   b00778066b638       kindnet-cv6n8                                          kube-system
	49d5db0bf8c81       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           59 seconds ago      Running             kube-controller-manager     0                   e42efaef3da95       kube-controller-manager-default-k8s-diff-port-759696   kube-system
	2b4e13927c1dd       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           59 seconds ago      Running             kube-scheduler              0                   a1b1ad14dfee3       kube-scheduler-default-k8s-diff-port-759696            kube-system
	96bf17c21fc5e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   f8ad85e2a00c7       etcd-default-k8s-diff-port-759696                      kube-system
	5081ea10eaf55       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           59 seconds ago      Running             kube-apiserver              0                   19d38df27e140       kube-apiserver-default-k8s-diff-port-759696            kube-system
	
	
	==> coredns [0cfc5bf6ac0e2acc0bfc6a44706294eb6472b9a0b6da79d346ae3dfa437729de] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42467 - 37979 "HINFO IN 5694450656926657780.8451394918645071831. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025745246s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-759696
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-759696
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=default-k8s-diff-port-759696
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-759696
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:53:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:53:34 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:53:34 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:53:34 +0000   Sat, 06 Dec 2025 09:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:53:34 +0000   Sat, 06 Dec 2025 09:52:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-759696
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                66717458-de25-4b46-9089-82e699ed1547
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-gpnjq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-759696                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-cv6n8                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-759696             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-759696    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-jstq5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-759696             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ktwz5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tkv7v                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-759696 event: Registered Node default-k8s-diff-port-759696 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-759696 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-759696 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node default-k8s-diff-port-759696 event: Registered Node default-k8s-diff-port-759696 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [96bf17c21fc5ef4c1b3dca26666987c3ead355280a820de4ef784becde9de15b] <==
	{"level":"warn","ts":"2025-12-06T09:53:02.909273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.917974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.924348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.930640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.937430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.945039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.952080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.959823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.967747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.977656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.985980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:02.993785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.001676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.014961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.021999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.030066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.037224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.044344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.051351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.057990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.065130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.087773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.094980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.101792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:53:03.155538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56328","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:01 up  2:36,  0 user,  load average: 4.10, 3.34, 3.37
	Linux default-k8s-diff-port-759696 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [47b3688c94fe5f8791be5571032439e7f24a58e707a037c30b8f448c060aafe2] <==
	I1206 09:53:04.537444       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:53:04.537676       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1206 09:53:04.537792       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:53:04.537808       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:53:04.537828       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:53:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:53:04.835436       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:53:04.835480       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:53:04.835493       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:53:04.835618       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:53:05.135861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:53:05.135884       1 metrics.go:72] Registering metrics
	I1206 09:53:05.135937       1 controller.go:711] "Syncing nftables rules"
	I1206 09:53:14.835888       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:14.835978       1 main.go:301] handling current node
	I1206 09:53:24.841488       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:24.841544       1 main.go:301] handling current node
	I1206 09:53:34.835776       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:34.835810       1 main.go:301] handling current node
	I1206 09:53:44.836582       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:44.836632       1 main.go:301] handling current node
	I1206 09:53:54.836861       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:53:54.836904       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5081ea10eaf550a1552364d04b9716dd633af5964fac9bc876f2cc1e5ca71b16] <==
	I1206 09:53:03.640520       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:53:03.640531       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:53:03.640734       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:53:03.640534       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:53:03.640853       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:53:03.640861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:53:03.640942       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:53:03.649026       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:53:03.665425       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:53:03.674698       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 09:53:03.674722       1 policy_source.go:240] refreshing policies
	I1206 09:53:03.680509       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:53:03.691726       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:53:03.908709       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:53:03.935267       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:53:03.965027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:53:03.973974       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:53:03.982214       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:53:04.014932       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.49.220"}
	I1206 09:53:04.025715       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.217.234"}
	I1206 09:53:04.544196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:53:06.981055       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:53:06.981105       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:53:07.331948       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:53:07.530305       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [49d5db0bf8c817844e681d0c272f78bea45bd7a69be93dbd6b87ce00764c41c3] <==
	I1206 09:53:06.943801       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:53:06.946027       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:53:06.947516       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:53:06.959788       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:53:06.962068       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:53:06.964443       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:53:06.976903       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:53:06.976914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:53:06.977177       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 09:53:06.977280       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:53:06.977284       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:53:06.977335       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:53:06.977606       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:53:06.987612       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:53:06.993029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:53:06.997790       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:53:06.999033       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:53:07.003184       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:53:07.005812       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:53:07.009143       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:53:07.011413       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:53:07.020760       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:53:07.026539       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:53:07.026557       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:53:07.026565       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0fd27514e3b63af2ee66e57c5e0d3db6ac0f18efcf343dbefc6b2f2e256584f0] <==
	I1206 09:53:04.457309       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:53:04.524317       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:53:04.624645       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:53:04.624831       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1206 09:53:04.624938       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:53:04.647300       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:53:04.647361       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:53:04.653881       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:53:04.654439       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:53:04.654483       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:04.656181       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:53:04.656261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:53:04.656294       1 config.go:200] "Starting service config controller"
	I1206 09:53:04.656299       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:53:04.656314       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:53:04.656319       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:53:04.656355       1 config.go:309] "Starting node config controller"
	I1206 09:53:04.656371       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:53:04.756434       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:53:04.756502       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:53:04.756502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:53:04.757220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2b4e13927c1dd98b75c5d83e4aec397dc2e4749caaf7821cfac821811b1d3da7] <==
	I1206 09:53:02.016503       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:53:03.580595       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:53:03.580651       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:53:03.580665       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:53:03.580674       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:53:03.601367       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:53:03.601417       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:03.607263       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:53:03.607582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:53:03.607647       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:53:03.607692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:53:03.707907       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:53:07 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:07.717733     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446x8\" (UniqueName: \"kubernetes.io/projected/6b384527-3c93-4f55-839a-bae4f1b854db-kube-api-access-446x8\") pod \"kubernetes-dashboard-855c9754f9-tkv7v\" (UID: \"6b384527-3c93-4f55-839a-bae4f1b854db\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tkv7v"
	Dec 06 09:53:07 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:07.717762     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/87d4686a-1882-4a7b-adce-610fbd373a5d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ktwz5\" (UID: \"87d4686a-1882-4a7b-adce-610fbd373a5d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5"
	Dec 06 09:53:11 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:11.155169     726 scope.go:117] "RemoveContainer" containerID="016d01c477f8bc018e19ac9ac703912dbe90708ac8cbaff48e4d80fdd2177ae2"
	Dec 06 09:53:12 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:12.160238     726 scope.go:117] "RemoveContainer" containerID="016d01c477f8bc018e19ac9ac703912dbe90708ac8cbaff48e4d80fdd2177ae2"
	Dec 06 09:53:12 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:12.160423     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:12 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:12.161309     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:12 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:12.479298     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 06 09:53:13 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:13.165976     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:13 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:13.166159     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:17 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:17.195319     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tkv7v" podStartSLOduration=1.347345699 podStartE2EDuration="10.195296814s" podCreationTimestamp="2025-12-06 09:53:07 +0000 UTC" firstStartedPulling="2025-12-06 09:53:07.930043656 +0000 UTC m=+7.009201620" lastFinishedPulling="2025-12-06 09:53:16.777994783 +0000 UTC m=+15.857152735" observedRunningTime="2025-12-06 09:53:17.194918051 +0000 UTC m=+16.274076018" watchObservedRunningTime="2025-12-06 09:53:17.195296814 +0000 UTC m=+16.274454784"
	Dec 06 09:53:19 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:19.906667     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:19 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:19.906893     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:35.084936     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:35.228160     726 scope.go:117] "RemoveContainer" containerID="e16fc1acbc1a3e00c6bb37a331b9b2c9eade4f4612f070ec65b92a573a39aa99"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:35.228406     726 scope.go:117] "RemoveContainer" containerID="37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:35.228750     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:35 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:35.230129     726 scope.go:117] "RemoveContainer" containerID="1289e6d7da285692d4fa714fc6797eeba4ead826886d680935fba4c4461f6875"
	Dec 06 09:53:39 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:39.906755     726 scope.go:117] "RemoveContainer" containerID="37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	Dec 06 09:53:39 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:39.906977     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:55 default-k8s-diff-port-759696 kubelet[726]: I1206 09:53:55.084898     726 scope.go:117] "RemoveContainer" containerID="37de16197509ec1b3dedade107c1aa509389fa3f1ca91dc44bbd1bde30413501"
	Dec 06 09:53:55 default-k8s-diff-port-759696 kubelet[726]: E1206 09:53:55.085131     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ktwz5_kubernetes-dashboard(87d4686a-1882-4a7b-adce-610fbd373a5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ktwz5" podUID="87d4686a-1882-4a7b-adce-610fbd373a5d"
	Dec 06 09:53:56 default-k8s-diff-port-759696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:53:56 default-k8s-diff-port-759696 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:53:56 default-k8s-diff-port-759696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:53:56 default-k8s-diff-port-759696 systemd[1]: kubelet.service: Consumed 1.824s CPU time.
	
	
	==> kubernetes-dashboard [8e7c35c6460d6dde825702099c5cd8dc4d972b97f6f5de41d29064a559c7649b] <==
	2025/12/06 09:53:16 Using namespace: kubernetes-dashboard
	2025/12/06 09:53:16 Using in-cluster config to connect to apiserver
	2025/12/06 09:53:16 Using secret token for csrf signing
	2025/12/06 09:53:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:53:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:53:16 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 09:53:16 Generating JWE encryption key
	2025/12/06 09:53:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:53:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:53:16 Initializing JWE encryption key from synchronized object
	2025/12/06 09:53:16 Creating in-cluster Sidecar client
	2025/12/06 09:53:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:53:16 Serving insecurely on HTTP port: 9090
	2025/12/06 09:53:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:53:16 Starting overwatch
	
	
	==> storage-provisioner [1289e6d7da285692d4fa714fc6797eeba4ead826886d680935fba4c4461f6875] <==
	I1206 09:53:04.421756       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:53:34.424547       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fb21d006ca9cf8322ae539b46315ce541bd384cf9e6845d9bc68e5beaf17605a] <==
	I1206 09:53:35.291957       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:53:35.300028       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:53:35.300083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:53:35.302793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:38.759407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:43.024447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:46.623226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:49.677115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:52.699843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:52.705223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:52.705367       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:53:52.705582       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759696_0ffa14f0-2b59-4526-93c8-04417567bbe6!
	I1206 09:53:52.705610       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"319c0bdb-ab5a-4a15-8303-dcd154877547", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-759696_0ffa14f0-2b59-4526-93c8-04417567bbe6 became leader
	W1206 09:53:52.707319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:52.711269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:52.805742       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759696_0ffa14f0-2b59-4526-93c8-04417567bbe6!
	W1206 09:53:54.713908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:54.719110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:56.723722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:56.729300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:58.732730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:58.736616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:54:00.740342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:54:00.745983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696: exit status 2 (377.599022ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-759696 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.22s)

TestStartStop/group/embed-certs/serial/Pause (6.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-997968 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-997968 --alsologtostderr -v=1: exit status 80 (2.408951727s)

-- stdout --
	* Pausing node embed-certs-997968 ... 
	
	

-- /stdout --
** stderr ** 
	I1206 09:53:56.728101  805495 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:56.728258  805495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:56.728268  805495 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:56.728272  805495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:56.728559  805495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:56.728913  805495 out.go:368] Setting JSON to false
	I1206 09:53:56.728941  805495 mustload.go:66] Loading cluster: embed-certs-997968
	I1206 09:53:56.729444  805495 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:56.729935  805495 cli_runner.go:164] Run: docker container inspect embed-certs-997968 --format={{.State.Status}}
	I1206 09:53:56.751673  805495 host.go:66] Checking if "embed-certs-997968" exists ...
	I1206 09:53:56.752019  805495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:56.846712  805495 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-06 09:53:56.83434814 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:56.847531  805495 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-997968 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:53:56.849320  805495 out.go:179] * Pausing node embed-certs-997968 ... 
	I1206 09:53:56.850404  805495 host.go:66] Checking if "embed-certs-997968" exists ...
	I1206 09:53:56.850768  805495 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:56.850818  805495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-997968
	I1206 09:53:56.870012  805495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33226 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/embed-certs-997968/id_rsa Username:docker}
	I1206 09:53:56.966624  805495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:56.980757  805495 pause.go:52] kubelet running: true
	I1206 09:53:56.980829  805495 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:57.154251  805495 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:57.154332  805495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:57.223514  805495 cri.go:89] found id: "b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b"
	I1206 09:53:57.223535  805495 cri.go:89] found id: "7cab4e729fa4fbf88d02cc827d35d3b458ec55221475e3d84901c71b0aaffabd"
	I1206 09:53:57.223539  805495 cri.go:89] found id: "7c3a2deb09c8c337db0be2cf134ecc3f8dc26a79db21ff5911915b272f23ebec"
	I1206 09:53:57.223542  805495 cri.go:89] found id: "d5f15fc411f8e34d0fe7d52849aaf1d7a447d0b42b610ca92f5e65f54ca33b72"
	I1206 09:53:57.223560  805495 cri.go:89] found id: "cf27f79cf660003825ed87864bf3215b6c1821e837a85725d61f857172afc541"
	I1206 09:53:57.223564  805495 cri.go:89] found id: "f0c346e2ecb8689cc659d92dd982e72bea92df80d9c19d6fe9b36590adae4c5d"
	I1206 09:53:57.223566  805495 cri.go:89] found id: "ccbdbea6e31f77d77210cb56e75d243da8b87d3a1bba9fb48502f886fe7cc436"
	I1206 09:53:57.223569  805495 cri.go:89] found id: "9567c8724e7902114f90b0bfd9aeaba8475dd4c7fdffc2b71b9794b8d2429d02"
	I1206 09:53:57.223572  805495 cri.go:89] found id: "aea22bcd770b685f5b36f548f9387928f647a3eb4b9ecbbe8f9c4b71394765c0"
	I1206 09:53:57.223578  805495 cri.go:89] found id: "7ef988bd352613c28719b53227c1f510e726f382778e72ae58558de1a8ee8a55"
	I1206 09:53:57.223584  805495 cri.go:89] found id: "65c20f28324841a573607a02aa9b5804867835a7e2ec696ee719ec51845d6c3f"
	I1206 09:53:57.223586  805495 cri.go:89] found id: ""
	I1206 09:53:57.223624  805495 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:57.236600  805495 retry.go:31] will retry after 203.649581ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:57Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:57.440853  805495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:57.454070  805495 pause.go:52] kubelet running: false
	I1206 09:53:57.454135  805495 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:57.608427  805495 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:57.608535  805495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:57.680521  805495 cri.go:89] found id: "b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b"
	I1206 09:53:57.680549  805495 cri.go:89] found id: "7cab4e729fa4fbf88d02cc827d35d3b458ec55221475e3d84901c71b0aaffabd"
	I1206 09:53:57.680555  805495 cri.go:89] found id: "7c3a2deb09c8c337db0be2cf134ecc3f8dc26a79db21ff5911915b272f23ebec"
	I1206 09:53:57.680561  805495 cri.go:89] found id: "d5f15fc411f8e34d0fe7d52849aaf1d7a447d0b42b610ca92f5e65f54ca33b72"
	I1206 09:53:57.680566  805495 cri.go:89] found id: "cf27f79cf660003825ed87864bf3215b6c1821e837a85725d61f857172afc541"
	I1206 09:53:57.680571  805495 cri.go:89] found id: "f0c346e2ecb8689cc659d92dd982e72bea92df80d9c19d6fe9b36590adae4c5d"
	I1206 09:53:57.680576  805495 cri.go:89] found id: "ccbdbea6e31f77d77210cb56e75d243da8b87d3a1bba9fb48502f886fe7cc436"
	I1206 09:53:57.680581  805495 cri.go:89] found id: "9567c8724e7902114f90b0bfd9aeaba8475dd4c7fdffc2b71b9794b8d2429d02"
	I1206 09:53:57.680590  805495 cri.go:89] found id: "aea22bcd770b685f5b36f548f9387928f647a3eb4b9ecbbe8f9c4b71394765c0"
	I1206 09:53:57.680600  805495 cri.go:89] found id: "7ef988bd352613c28719b53227c1f510e726f382778e72ae58558de1a8ee8a55"
	I1206 09:53:57.680608  805495 cri.go:89] found id: "65c20f28324841a573607a02aa9b5804867835a7e2ec696ee719ec51845d6c3f"
	I1206 09:53:57.680612  805495 cri.go:89] found id: ""
	I1206 09:53:57.680662  805495 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:57.694931  805495 retry.go:31] will retry after 286.343114ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:57Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:57.982514  805495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:57.996950  805495 pause.go:52] kubelet running: false
	I1206 09:53:57.997001  805495 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:58.161506  805495 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:58.161572  805495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:58.242349  805495 cri.go:89] found id: "b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b"
	I1206 09:53:58.242372  805495 cri.go:89] found id: "7cab4e729fa4fbf88d02cc827d35d3b458ec55221475e3d84901c71b0aaffabd"
	I1206 09:53:58.242377  805495 cri.go:89] found id: "7c3a2deb09c8c337db0be2cf134ecc3f8dc26a79db21ff5911915b272f23ebec"
	I1206 09:53:58.242382  805495 cri.go:89] found id: "d5f15fc411f8e34d0fe7d52849aaf1d7a447d0b42b610ca92f5e65f54ca33b72"
	I1206 09:53:58.242385  805495 cri.go:89] found id: "cf27f79cf660003825ed87864bf3215b6c1821e837a85725d61f857172afc541"
	I1206 09:53:58.242389  805495 cri.go:89] found id: "f0c346e2ecb8689cc659d92dd982e72bea92df80d9c19d6fe9b36590adae4c5d"
	I1206 09:53:58.242393  805495 cri.go:89] found id: "ccbdbea6e31f77d77210cb56e75d243da8b87d3a1bba9fb48502f886fe7cc436"
	I1206 09:53:58.242397  805495 cri.go:89] found id: "9567c8724e7902114f90b0bfd9aeaba8475dd4c7fdffc2b71b9794b8d2429d02"
	I1206 09:53:58.242401  805495 cri.go:89] found id: "aea22bcd770b685f5b36f548f9387928f647a3eb4b9ecbbe8f9c4b71394765c0"
	I1206 09:53:58.242411  805495 cri.go:89] found id: "7ef988bd352613c28719b53227c1f510e726f382778e72ae58558de1a8ee8a55"
	I1206 09:53:58.242419  805495 cri.go:89] found id: "65c20f28324841a573607a02aa9b5804867835a7e2ec696ee719ec51845d6c3f"
	I1206 09:53:58.242424  805495 cri.go:89] found id: ""
	I1206 09:53:58.242514  805495 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:58.255941  805495 retry.go:31] will retry after 528.228926ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:58Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:53:58.784495  805495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:53:58.799500  805495 pause.go:52] kubelet running: false
	I1206 09:53:58.799560  805495 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:53:58.972368  805495 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:53:58.972542  805495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:53:59.047553  805495 cri.go:89] found id: "b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b"
	I1206 09:53:59.047579  805495 cri.go:89] found id: "7cab4e729fa4fbf88d02cc827d35d3b458ec55221475e3d84901c71b0aaffabd"
	I1206 09:53:59.047586  805495 cri.go:89] found id: "7c3a2deb09c8c337db0be2cf134ecc3f8dc26a79db21ff5911915b272f23ebec"
	I1206 09:53:59.047591  805495 cri.go:89] found id: "d5f15fc411f8e34d0fe7d52849aaf1d7a447d0b42b610ca92f5e65f54ca33b72"
	I1206 09:53:59.047596  805495 cri.go:89] found id: "cf27f79cf660003825ed87864bf3215b6c1821e837a85725d61f857172afc541"
	I1206 09:53:59.047611  805495 cri.go:89] found id: "f0c346e2ecb8689cc659d92dd982e72bea92df80d9c19d6fe9b36590adae4c5d"
	I1206 09:53:59.047616  805495 cri.go:89] found id: "ccbdbea6e31f77d77210cb56e75d243da8b87d3a1bba9fb48502f886fe7cc436"
	I1206 09:53:59.047621  805495 cri.go:89] found id: "9567c8724e7902114f90b0bfd9aeaba8475dd4c7fdffc2b71b9794b8d2429d02"
	I1206 09:53:59.047626  805495 cri.go:89] found id: "aea22bcd770b685f5b36f548f9387928f647a3eb4b9ecbbe8f9c4b71394765c0"
	I1206 09:53:59.047636  805495 cri.go:89] found id: "7ef988bd352613c28719b53227c1f510e726f382778e72ae58558de1a8ee8a55"
	I1206 09:53:59.047641  805495 cri.go:89] found id: "65c20f28324841a573607a02aa9b5804867835a7e2ec696ee719ec51845d6c3f"
	I1206 09:53:59.047645  805495 cri.go:89] found id: ""
	I1206 09:53:59.047720  805495 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:53:59.063427  805495 out.go:203] 
	W1206 09:53:59.064493  805495 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:53:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:53:59.064514  805495 out.go:285] * 
	W1206 09:53:59.068710  805495 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:53:59.069987  805495 out.go:203] 

                                                
                                                
** /stderr **
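Note the mismatch the stderr log exposes: crictl, which talks to CRI-O, still lists eleven running kube-system/kubernetes-dashboard containers, while `sudo runc list -f json` fails because runc's state directory `/run/runc` does not exist, so the pause aborts with GUEST_PAUSE after the retries are exhausted. A minimal sketch for confirming the mismatch by hand, assuming the profile from this run (`/run/runc` is runc's default state root and may differ if the runtime is configured with another --root):

	# containers are visible through the CRI...
	out/minikube-linux-amd64 -p embed-certs-997968 ssh -- sudo crictl ps -a --quiet
	# ...but runc's default state directory is absent, so the pause-side listing fails
	out/minikube-linux-amd64 -p embed-certs-997968 ssh -- sudo ls /run/runc
	out/minikube-linux-amd64 -p embed-certs-997968 ssh -- sudo runc list -f json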
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-997968 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-997968
helpers_test.go:243: (dbg) docker inspect embed-certs-997968:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062",
	        "Created": "2025-12-06T09:51:52.675095642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 792788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:53:01.737101155Z",
	            "FinishedAt": "2025-12-06T09:53:00.68511387Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/hosts",
	        "LogPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062-json.log",
	        "Name": "/embed-certs-997968",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-997968:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-997968",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062",
	                "LowerDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-997968",
	                "Source": "/var/lib/docker/volumes/embed-certs-997968/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-997968",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-997968",
	                "name.minikube.sigs.k8s.io": "embed-certs-997968",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bae149f05228d50f6c0260de71549fc3f67b5b8013514395d5d5c5600e764ea3",
	            "SandboxKey": "/var/run/docker/netns/bae149f05228",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33226"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33230"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33228"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33229"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-997968": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d9447c39c3ca701200d25c23e931e64eec9678dd597d8d4ca10d4b524dddd69",
	                    "EndpointID": "698bcdd3ab588f010bdf59409e90c564460380aa1b2d102265a23234966e41bc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "fa:0c:80:e7:9d:c0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-997968",
	                        "0e3f6d38a916"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
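The `NetworkSettings.Ports` map in the inspect output above is where the harness gets its SSH endpoint: 22/tcp is published on 127.0.0.1:33226, the same port the earlier `sshutil` line connects to. To pull just that port map out of the inspect JSON (jq assumed available on the host; it is not part of the test harness):

	docker inspect embed-certs-997968 \
	  | jq '.[0].NetworkSettings.Ports | map_values(.[0].HostPort)'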
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997968 -n embed-certs-997968
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997968 -n embed-certs-997968: exit status 2 (357.446674ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-997968 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-997968 logs -n 25: (1.259380606s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-997968 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ image   │ newest-cni-641599 image list --format=json                                                                                                                                                                                                           │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p newest-cni-641599 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p auto-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-983381                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ no-preload-521770 image list --format=json                                                                                                                                                                                                           │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p no-preload-521770 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ delete  │ -p no-preload-521770                                                                                                                                                                                                                                 │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p no-preload-521770                                                                                                                                                                                                                                 │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p kindnet-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-983381               │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ default-k8s-diff-port-759696 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p default-k8s-diff-port-759696 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ embed-certs-997968 image list --format=json                                                                                                                                                                                                          │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-997968 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:53:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:53:35.736114  802078 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:35.736358  802078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:35.736366  802078 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:35.736370  802078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:35.736608  802078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:35.737088  802078 out.go:368] Setting JSON to false
	I1206 09:53:35.738323  802078 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9360,"bootTime":1765005456,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:53:35.738388  802078 start.go:143] virtualization: kvm guest
	I1206 09:53:35.740317  802078 out.go:179] * [kindnet-983381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:53:35.741422  802078 notify.go:221] Checking for updates...
	I1206 09:53:35.741506  802078 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:53:35.742495  802078 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:53:35.743616  802078 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:35.744630  802078 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:53:35.745749  802078 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:53:35.746924  802078 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:53:35.748304  802078 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748393  802078 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748491  802078 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748589  802078 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:53:35.772982  802078 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:53:35.773088  802078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:35.830680  802078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:35.820532325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:35.830809  802078 docker.go:319] overlay module found
	I1206 09:53:35.832481  802078 out.go:179] * Using the docker driver based on user configuration
	I1206 09:53:35.833543  802078 start.go:309] selected driver: docker
	I1206 09:53:35.833558  802078 start.go:927] validating driver "docker" against <nil>
	I1206 09:53:35.833571  802078 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:53:35.834109  802078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:35.894209  802078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:35.883098075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:35.894359  802078 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:53:35.894710  802078 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:35.896340  802078 out.go:179] * Using Docker driver with root privileges
	I1206 09:53:35.897286  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:35.897302  802078 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:53:35.897380  802078 start.go:353] cluster config:
	{Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:35.898495  802078 out.go:179] * Starting "kindnet-983381" primary control-plane node in "kindnet-983381" cluster
	I1206 09:53:35.899494  802078 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:53:35.900543  802078 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:53:35.901765  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:35.901802  802078 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:53:35.901815  802078 cache.go:65] Caching tarball of preloaded images
	I1206 09:53:35.901854  802078 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:53:35.901908  802078 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:53:35.901922  802078 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:53:35.902031  802078 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json ...
	I1206 09:53:35.902059  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json: {Name:mk3a79de74bde68ec31b151eacb622c73b38daf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:35.924146  802078 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:53:35.924170  802078 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:53:35.924185  802078 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:53:35.924214  802078 start.go:360] acquireMachinesLock for kindnet-983381: {Name:mk6e4785105686f4f72d41f8081d2646bcdec596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:53:35.924309  802078 start.go:364] duration metric: took 76.057µs to acquireMachinesLock for "kindnet-983381"
	I1206 09:53:35.924331  802078 start.go:93] Provisioning new machine with config: &{Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:35.924423  802078 start.go:125] createHost starting for "" (driver="docker")
	W1206 09:53:33.030174  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:35.530789  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:35.464753  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:35.964668  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:36.464301  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:36.964782  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:37.464682  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:37.965001  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:38.464290  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:38.545704  796626 kubeadm.go:1114] duration metric: took 4.655385618s to wait for elevateKubeSystemPrivileges
	I1206 09:53:38.545746  796626 kubeadm.go:403] duration metric: took 16.212898927s to StartCluster
	I1206 09:53:38.545772  796626 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:38.545859  796626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:38.548341  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:38.548970  796626 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:38.548998  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:53:38.549213  796626 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:38.549132  796626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:53:38.549388  796626 addons.go:70] Setting storage-provisioner=true in profile "auto-983381"
	I1206 09:53:38.549417  796626 addons.go:239] Setting addon storage-provisioner=true in "auto-983381"
	I1206 09:53:38.549483  796626 host.go:66] Checking if "auto-983381" exists ...
	I1206 09:53:38.550037  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.549392  796626 addons.go:70] Setting default-storageclass=true in profile "auto-983381"
	I1206 09:53:38.550507  796626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-983381"
	I1206 09:53:38.550833  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.550926  796626 out.go:179] * Verifying Kubernetes components...
	I1206 09:53:38.553410  796626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:38.588099  796626 addons.go:239] Setting addon default-storageclass=true in "auto-983381"
	I1206 09:53:38.588150  796626 host.go:66] Checking if "auto-983381" exists ...
	I1206 09:53:38.588574  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.617638  796626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:38.617665  796626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:53:38.617866  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:38.626643  796626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1206 09:53:35.198662  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:37.200822  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:38.638810  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:38.668107  796626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:38.668132  796626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:53:38.668195  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:38.675658  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:53:38.692832  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:38.702368  796626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:38.748795  796626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:38.801277  796626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:39.027533  796626 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
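The kubectl replace pipeline at 09:53:38.675658 splices a hosts stanza into the coredns ConfigMap ahead of its forward directive, and a log directive ahead of errors. Reconstructed from the sed expressions in that command (surrounding Corefile directives elided), the injected fragment looks like:

        log
        errors
        ...
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...

With fallthrough set, any name other than host.minikube.internal keeps flowing to the upstream resolver.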
	I1206 09:53:39.029182  796626 node_ready.go:35] waiting up to 15m0s for node "auto-983381" to be "Ready" ...
	I1206 09:53:39.654289  796626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-983381" context rescaled to 1 replicas
	I1206 09:53:40.345178  796626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.543850325s)
	I1206 09:53:40.346941  796626 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1206 09:53:35.925937  802078 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:53:35.926209  802078 start.go:159] libmachine.API.Create for "kindnet-983381" (driver="docker")
	I1206 09:53:35.926252  802078 client.go:173] LocalClient.Create starting
	I1206 09:53:35.926340  802078 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:53:35.926378  802078 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:35.926405  802078 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:35.926506  802078 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:53:35.926535  802078 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:35.926552  802078 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:35.926986  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:53:35.943608  802078 cli_runner.go:211] docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:53:35.943723  802078 network_create.go:284] running [docker network inspect kindnet-983381] to gather additional debugging logs...
	I1206 09:53:35.943748  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381
	W1206 09:53:35.960448  802078 cli_runner.go:211] docker network inspect kindnet-983381 returned with exit code 1
	I1206 09:53:35.960495  802078 network_create.go:287] error running [docker network inspect kindnet-983381]: docker network inspect kindnet-983381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-983381 not found
	I1206 09:53:35.960514  802078 network_create.go:289] output of [docker network inspect kindnet-983381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-983381 not found
	
	** /stderr **
	I1206 09:53:35.960636  802078 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:35.980401  802078 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:53:35.981149  802078 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:53:35.981925  802078 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:53:35.982560  802078 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fadb45f2248d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:97:af:e5:cc:0b} reservation:<nil>}
	I1206 09:53:35.983088  802078 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5d9447c39c3c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e2:61:5e:c6:7b:21} reservation:<nil>}
	I1206 09:53:35.983881  802078 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e6b5e0}
	I1206 09:53:35.983907  802078 network_create.go:124] attempt to create docker network kindnet-983381 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:53:35.983952  802078 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-983381 kindnet-983381
	I1206 09:53:36.037591  802078 network_create.go:108] docker network kindnet-983381 192.168.94.0/24 created
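network.go walks a fixed series of candidate private /24s and takes the first one not already claimed by a bridge interface; the five skips above cover 192.168.49.0/24 through 192.168.85.0/24 before 192.168.94.0/24 is chosen. A minimal Go sketch of that first-free scan, assuming (inferred only from the sequence above, not confirmed from source) a third-octet step of 9:

package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 that is not in the
// taken set, mirroring the skip/use decisions logged above.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 { // hypothetical step; matches 49, 58, 67, 76, 85, 94
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free candidate
}

func main() {
	taken := map[string]bool{ // the five subnets reported as taken in the log
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.94.0/24, as chosen above
}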
	I1206 09:53:36.037621  802078 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-983381" container
	I1206 09:53:36.037678  802078 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:53:36.055484  802078 cli_runner.go:164] Run: docker volume create kindnet-983381 --label name.minikube.sigs.k8s.io=kindnet-983381 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:53:36.074528  802078 oci.go:103] Successfully created a docker volume kindnet-983381
	I1206 09:53:36.074605  802078 cli_runner.go:164] Run: docker run --rm --name kindnet-983381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-983381 --entrypoint /usr/bin/test -v kindnet-983381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:53:36.495904  802078 oci.go:107] Successfully prepared a docker volume kindnet-983381
	I1206 09:53:36.495988  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:36.496004  802078 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:53:36.496085  802078 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:53:40.493659  802078 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.997500219s)
	I1206 09:53:40.493705  802078 kic.go:203] duration metric: took 3.997696888s to extract preloaded images to volume ...
	W1206 09:53:40.493857  802078 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:53:40.493908  802078 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:53:40.493960  802078 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:53:40.553379  802078 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-983381 --name kindnet-983381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-983381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-983381 --network kindnet-983381 --ip 192.168.94.2 --volume kindnet-983381:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	W1206 09:53:37.530880  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:39.530936  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:40.844704  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Running}}
	I1206 09:53:40.865257  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:40.884729  802078 cli_runner.go:164] Run: docker exec kindnet-983381 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:53:40.933934  802078 oci.go:144] the created container "kindnet-983381" has a running status.
	I1206 09:53:40.933992  802078 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa...
	I1206 09:53:41.065963  802078 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:53:41.097932  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:41.118699  802078 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:53:41.118719  802078 kic_runner.go:114] Args: [docker exec --privileged kindnet-983381 chown docker:docker /home/docker/.ssh/authorized_keys]
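kic.go:225 and the kic_runner steps above generate a fresh RSA keypair for the node and install the public half as /home/docker/.ssh/authorized_keys (381 bytes here). A self-contained sketch of that keypair step using golang.org/x/crypto/ssh; illustrative only, not minikube's actual code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeKeyPair writes a PEM-encoded private key to path and an OpenSSH
// authorized_keys line ("ssh-rsa AAAA...") to path+".pub".
func writeKeyPair(path string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(path, privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
}

func main() {
	if err := writeKeyPair("id_rsa"); err != nil {
		panic(err)
	}
}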
	I1206 09:53:41.177800  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:41.202566  802078 machine.go:94] provisionDockerMachine start ...
	I1206 09:53:41.202682  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.226294  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.226976  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.227014  802078 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:53:41.366826  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-983381
	
	I1206 09:53:41.366854  802078 ubuntu.go:182] provisioning hostname "kindnet-983381"
	I1206 09:53:41.366930  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.388560  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.388853  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.388868  802078 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-983381 && echo "kindnet-983381" | sudo tee /etc/hostname
	I1206 09:53:41.533194  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-983381
	
	I1206 09:53:41.533282  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.553319  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.553612  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.553649  802078 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-983381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-983381/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-983381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:53:41.687391  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:53:41.687422  802078 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:53:41.687496  802078 ubuntu.go:190] setting up certificates
	I1206 09:53:41.687511  802078 provision.go:84] configureAuth start
	I1206 09:53:41.687570  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:41.706986  802078 provision.go:143] copyHostCerts
	I1206 09:53:41.707057  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:53:41.707070  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:53:41.707141  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:53:41.707232  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:53:41.707242  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:53:41.707269  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:53:41.707336  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:53:41.707343  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:53:41.707366  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:53:41.707413  802078 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.kindnet-983381 san=[127.0.0.1 192.168.94.2 kindnet-983381 localhost minikube]
	I1206 09:53:41.806395  802078 provision.go:177] copyRemoteCerts
	I1206 09:53:41.806477  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:53:41.806526  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.825939  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:41.922925  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1206 09:53:41.943043  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:53:41.962026  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:53:41.980803  802078 provision.go:87] duration metric: took 293.274301ms to configureAuth
	I1206 09:53:41.980839  802078 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:53:41.981030  802078 config.go:182] Loaded profile config "kindnet-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:41.981180  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.001023  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:42.001294  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:42.001312  802078 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:53:42.284104  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:53:42.284126  802078 machine.go:97] duration metric: took 1.081535088s to provisionDockerMachine
	I1206 09:53:42.284136  802078 client.go:176] duration metric: took 6.35787804s to LocalClient.Create
	I1206 09:53:42.284158  802078 start.go:167] duration metric: took 6.357949811s to libmachine.API.Create "kindnet-983381"
	I1206 09:53:42.284171  802078 start.go:293] postStartSetup for "kindnet-983381" (driver="docker")
	I1206 09:53:42.284188  802078 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:53:42.284255  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:53:42.284310  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.302172  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.400807  802078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:53:42.404744  802078 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:53:42.404778  802078 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:53:42.404792  802078 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:53:42.404846  802078 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:53:42.404962  802078 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:53:42.405098  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:53:42.414846  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:42.437157  802078 start.go:296] duration metric: took 152.966336ms for postStartSetup
	I1206 09:53:42.437535  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:42.455950  802078 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json ...
	I1206 09:53:42.456172  802078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:53:42.456212  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.474118  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.568913  802078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:53:42.573671  802078 start.go:128] duration metric: took 6.649231212s to createHost
	I1206 09:53:42.573696  802078 start.go:83] releasing machines lock for "kindnet-983381", held for 6.649375377s
	I1206 09:53:42.573776  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:42.593508  802078 ssh_runner.go:195] Run: cat /version.json
	I1206 09:53:42.593569  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.593516  802078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:53:42.593700  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.611419  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.612544  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.760558  802078 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:42.767177  802078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:53:42.803261  802078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:53:42.807859  802078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:53:42.807927  802078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:53:42.833482  802078 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:53:42.833508  802078 start.go:496] detecting cgroup driver to use...
	I1206 09:53:42.833546  802078 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:53:42.833599  802078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:53:42.849782  802078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:53:42.862043  802078 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:53:42.862089  802078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:53:42.879135  802078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:53:42.898925  802078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:53:42.987951  802078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:53:43.078627  802078 docker.go:234] disabling docker service ...
	I1206 09:53:43.078699  802078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:53:43.100368  802078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:53:43.113370  802078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:53:43.201176  802078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:53:43.294081  802078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:53:43.307631  802078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:53:43.321801  802078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:53:43.321856  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.331480  802078 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:53:43.331547  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.340367  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.349027  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.357421  802078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:53:43.365342  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.375704  802078 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.390512  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.399512  802078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:53:43.406601  802078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:53:43.413755  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:43.497188  802078 ssh_runner.go:195] Run: sudo systemctl restart crio
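Read together, the sed edits between 09:53:43.321856 and 09:53:43.390512 leave /etc/crio/crio.conf.d/02-crio.conf with the following fragment before crio is restarted (reconstructed from those commands; other keys in the file are untouched and omitted here):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The unprivileged-port sysctl lets pods bind ports below 1024, and the cgroup_manager value matches the "systemd" cgroup driver detected on the host at 09:53:42.833546.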
	I1206 09:53:43.639801  802078 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:53:43.639882  802078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:53:43.644037  802078 start.go:564] Will wait 60s for crictl version
	I1206 09:53:43.644085  802078 ssh_runner.go:195] Run: which crictl
	I1206 09:53:43.647878  802078 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:53:43.673703  802078 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:53:43.673775  802078 ssh_runner.go:195] Run: crio --version
	I1206 09:53:43.703877  802078 ssh_runner.go:195] Run: crio --version
	I1206 09:53:43.733325  802078 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1206 09:53:39.697901  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:41.698088  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:42.698368  789560 pod_ready.go:94] pod "coredns-66bc5c9577-gpnjq" is "Ready"
	I1206 09:53:42.698400  789560 pod_ready.go:86] duration metric: took 37.505994586s for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.700901  789560 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.705139  789560 pod_ready.go:94] pod "etcd-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.705165  789560 pod_ready.go:86] duration metric: took 4.236162ms for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.707252  789560 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.711008  789560 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.711028  789560 pod_ready.go:86] duration metric: took 3.752374ms for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.713026  789560 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.897101  789560 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.897138  789560 pod_ready.go:86] duration metric: took 184.092641ms for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.097595  789560 pod_ready.go:83] waiting for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.497251  789560 pod_ready.go:94] pod "kube-proxy-jstq5" is "Ready"
	I1206 09:53:43.497282  789560 pod_ready.go:86] duration metric: took 399.656581ms for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.697290  789560 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.096580  789560 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:44.096611  789560 pod_ready.go:86] duration metric: took 399.289382ms for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.096627  789560 pod_ready.go:40] duration metric: took 38.907883012s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:44.141173  789560 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:53:44.143056  789560 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-759696" cluster and "default" namespace by default
	W1206 09:53:42.029753  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:43.530143  792441 pod_ready.go:94] pod "coredns-66bc5c9577-kw8nl" is "Ready"
	I1206 09:53:43.530177  792441 pod_ready.go:86] duration metric: took 31.006235572s for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.532504  792441 pod_ready.go:83] waiting for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.539381  792441 pod_ready.go:94] pod "etcd-embed-certs-997968" is "Ready"
	I1206 09:53:43.539408  792441 pod_ready.go:86] duration metric: took 6.868509ms for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.541690  792441 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.545546  792441 pod_ready.go:94] pod "kube-apiserver-embed-certs-997968" is "Ready"
	I1206 09:53:43.545571  792441 pod_ready.go:86] duration metric: took 3.85484ms for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.547358  792441 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.728143  792441 pod_ready.go:94] pod "kube-controller-manager-embed-certs-997968" is "Ready"
	I1206 09:53:43.728172  792441 pod_ready.go:86] duration metric: took 180.793456ms for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.928272  792441 pod_ready.go:83] waiting for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.328082  792441 pod_ready.go:94] pod "kube-proxy-m2zpr" is "Ready"
	I1206 09:53:44.328117  792441 pod_ready.go:86] duration metric: took 399.817969ms for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.528776  792441 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.927733  792441 pod_ready.go:94] pod "kube-scheduler-embed-certs-997968" is "Ready"
	I1206 09:53:44.927763  792441 pod_ready.go:86] duration metric: took 398.958608ms for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.927778  792441 pod_ready.go:40] duration metric: took 32.40863001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:44.980591  792441 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:53:44.982680  792441 out.go:179] * Done! kubectl is now configured to use "embed-certs-997968" cluster and "default" namespace by default
	I1206 09:53:43.734370  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:43.751497  802078 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:53:43.755659  802078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:43.765989  802078 kubeadm.go:884] updating cluster {Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1206 09:53:43.766104  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:43.766146  802078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:43.799525  802078 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:43.799546  802078 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:53:43.799590  802078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:43.825735  802078 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:43.825758  802078 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:53:43.825766  802078 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1206 09:53:43.825861  802078 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-983381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1206 09:53:43.825926  802078 ssh_runner.go:195] Run: crio config
	I1206 09:53:43.872261  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:43.872292  802078 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:53:43.872313  802078 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-983381 NodeName:kindnet-983381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:53:43.872443  802078 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-983381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:53:43.872538  802078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:53:43.881153  802078 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:53:43.881224  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:53:43.889391  802078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1206 09:53:43.903305  802078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:53:43.918720  802078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1206 09:53:43.931394  802078 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:53:43.935089  802078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:43.944888  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:44.030422  802078 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:44.054751  802078 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381 for IP: 192.168.94.2
	I1206 09:53:44.054774  802078 certs.go:195] generating shared ca certs ...
	I1206 09:53:44.054796  802078 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.054979  802078 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:53:44.055055  802078 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:53:44.055074  802078 certs.go:257] generating profile certs ...
	I1206 09:53:44.055148  802078 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key
	I1206 09:53:44.055166  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt with IP's: []
	I1206 09:53:44.179136  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt ...
	I1206 09:53:44.179163  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt: {Name:mkbed0739e68db5951cd1670ef77a82b17aedb26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.179330  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key ...
	I1206 09:53:44.179342  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key: {Name:mk3e2c0a04a2e3e8f578932802d27c8b90d53860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.179422  802078 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3
	I1206 09:53:44.179436  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:53:44.342441  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 ...
	I1206 09:53:44.342476  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3: {Name:mk0af4503346333895c5c579d4fb2a8c9dcfdcee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.342649  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3 ...
	I1206 09:53:44.342667  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3: {Name:mk0a4a58e7f8845d02778448b3e5355101c2e3fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.342770  802078 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt
	I1206 09:53:44.342868  802078 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key
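The certs.go:364/crypto.go:68 lines above issue the apiserver serving certificate with SAN IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2], signed by the cached minikubeCA pair. A hedged crypto/x509 sketch of that signing step; the subject, key usages, and expiry here are assumptions (26280h is taken from the CertExpiration field in the config dump), not minikube's actual code:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signWithSANs issues a serving certificate carrying the four SAN IPs
// logged above, signed by the provided CA certificate and key.
func signWithSANs(caCert *x509.Certificate, caKey *rsa.PrivateKey) (der []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	der, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}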
	I1206 09:53:44.342951  802078 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key
	I1206 09:53:44.342972  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt with IP's: []
	I1206 09:53:44.520746  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt ...
	I1206 09:53:44.520773  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt: {Name:mk958624794dd2556a8291c9921b454b157f3c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.520946  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key ...
	I1206 09:53:44.520964  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key: {Name:mk18cc910abc009e83545d8f4f4f90e12f1bb752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.521164  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:53:44.521215  802078 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:53:44.521249  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:53:44.521292  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:53:44.521333  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:53:44.521369  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:53:44.521446  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:44.522095  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:53:44.543389  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:53:44.561934  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:53:44.580049  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:53:44.597598  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:53:44.616278  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:53:44.633517  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:53:44.650699  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:53:44.668505  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:53:44.687149  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:53:44.704280  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:53:44.721767  802078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:53:44.734395  802078 ssh_runner.go:195] Run: openssl version
	I1206 09:53:44.740626  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.747691  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:53:44.755054  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.758535  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.758584  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.794842  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:44.803151  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:44.811376  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.819662  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:53:44.827450  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.831929  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.831984  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.868641  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:53:44.877016  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:53:44.884481  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.892077  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:53:44.899367  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.903291  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.903338  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.949126  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:53:44.957677  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
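
Note: the three runs above follow the standard OpenSSL CA-store layout: hash each PEM with "openssl x509 -hash -noout", then symlink it into /etc/ssl/certs as <hash>.0 so TLS clients can find it by subject hash. A minimal Go sketch of the same flow, exec'ing openssl exactly as the log does (the installCA helper and the local, non-SSH execution are assumptions for illustration, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA hashes a PEM certificate with openssl and symlinks it into
// /etc/ssl/certs under its subject hash, mirroring the ln -fs in the log.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
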
	I1206 09:53:44.966120  802078 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:53:44.970308  802078 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:53:44.970374  802078 kubeadm.go:401] StartCluster: {Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:44.970477  802078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:53:44.970560  802078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:53:45.001343  802078 cri.go:89] found id: ""
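
Note: the empty found id: "" above means no kube-system containers exist yet on the fresh node. A minimal sketch of that listing step, assuming local crictl access rather than minikube's SSH runner (the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl command as the log and
// returns the container IDs it prints, one per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // empty on a fresh node, as in the log above
}
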
	I1206 09:53:45.001414  802078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:53:45.012421  802078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:53:45.021194  802078 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:53:45.021261  802078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:53:45.029115  802078 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:53:45.029134  802078 kubeadm.go:158] found existing configuration files:
	
	I1206 09:53:45.029169  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:53:45.037815  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:53:45.037872  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:53:45.045435  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:53:45.053955  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:53:45.054012  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:53:45.062303  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:53:45.070593  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:53:45.070643  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:53:45.079208  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:53:45.088134  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:53:45.088189  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:53:45.095979  802078 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:53:45.138079  802078 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:53:45.138199  802078 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:53:45.160175  802078 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:53:45.160254  802078 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:53:45.160285  802078 kubeadm.go:319] OS: Linux
	I1206 09:53:45.160352  802078 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:53:45.160443  802078 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:53:45.160554  802078 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:53:45.160647  802078 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:53:45.160734  802078 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:53:45.160812  802078 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:53:45.160892  802078 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:53:45.160962  802078 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:53:45.221322  802078 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:53:45.221523  802078 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:53:45.221688  802078 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:53:45.229074  802078 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:53:40.367608  796626 addons.go:530] duration metric: took 1.818454225s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1206 09:53:41.032946  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:43.533007  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
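
Note: the interleaved W... node_ready lines come from a second test profile (auto-983381, PID 796626) polling its node's Ready condition while the kindnet profile bootstraps. A minimal client-go sketch of such a poll, assuming a reachable kubeconfig (the path and retry cadence are illustrative, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := nodeReady(cs, "auto-983381")
		if err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		// will retry, as in the log above
		time.Sleep(2500 * time.Millisecond)
	}
}
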
	I1206 09:53:45.230909  802078 out.go:252]   - Generating certificates and keys ...
	I1206 09:53:45.231011  802078 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:53:45.231124  802078 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:53:45.410620  802078 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:53:45.930986  802078 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:53:46.263989  802078 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:53:46.476019  802078 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:53:46.655346  802078 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:53:46.655593  802078 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-983381 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:53:46.754725  802078 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:53:46.754894  802078 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-983381 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:53:46.832327  802078 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:53:46.992545  802078 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:53:47.179111  802078 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:53:47.179231  802078 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:53:47.446389  802078 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:53:47.805253  802078 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:53:48.039364  802078 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:53:48.570846  802078 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:53:48.856028  802078 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:53:48.856598  802078 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:53:48.860303  802078 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1206 09:53:46.032859  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:48.532015  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:48.862377  802078 out.go:252]   - Booting up control plane ...
	I1206 09:53:48.862492  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:53:48.862569  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:53:48.862631  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:53:48.876239  802078 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:53:48.876432  802078 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:53:48.883203  802078 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:53:48.883360  802078 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:53:48.883405  802078 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:53:48.990510  802078 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:53:48.990684  802078 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:53:50.991982  802078 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001590138s
	I1206 09:53:50.996336  802078 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:53:50.996527  802078 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:53:50.996665  802078 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:53:50.996797  802078 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:53:52.001314  802078 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004895696s
	I1206 09:53:52.920797  802078 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.924413306s
	I1206 09:53:54.497811  802078 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501431632s
	I1206 09:53:54.513719  802078 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:53:54.523571  802078 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:53:54.531849  802078 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:53:54.532153  802078 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-983381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:53:54.540058  802078 kubeadm.go:319] [bootstrap-token] Using token: prjydb.psh7t9q7oigrozcv
	W1206 09:53:51.032320  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:53.032413  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:54.541278  802078 out.go:252]   - Configuring RBAC rules ...
	I1206 09:53:54.541415  802078 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:53:54.544151  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:53:54.548808  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:53:54.551043  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:53:54.553366  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:53:54.556175  802078 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:53:54.904103  802078 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:53:55.318278  802078 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:53:55.904952  802078 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:53:55.906231  802078 kubeadm.go:319] 
	I1206 09:53:55.906359  802078 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:53:55.906379  802078 kubeadm.go:319] 
	I1206 09:53:55.906487  802078 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:53:55.906517  802078 kubeadm.go:319] 
	I1206 09:53:55.906565  802078 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:53:55.906639  802078 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:53:55.906715  802078 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:53:55.906721  802078 kubeadm.go:319] 
	I1206 09:53:55.906789  802078 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:53:55.906795  802078 kubeadm.go:319] 
	I1206 09:53:55.906852  802078 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:53:55.906863  802078 kubeadm.go:319] 
	I1206 09:53:55.906921  802078 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:53:55.907032  802078 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:53:55.907129  802078 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:53:55.907138  802078 kubeadm.go:319] 
	I1206 09:53:55.907270  802078 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:53:55.907380  802078 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:53:55.907390  802078 kubeadm.go:319] 
	I1206 09:53:55.907524  802078 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token prjydb.psh7t9q7oigrozcv \
	I1206 09:53:55.907678  802078 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:53:55.907711  802078 kubeadm.go:319] 	--control-plane 
	I1206 09:53:55.907722  802078 kubeadm.go:319] 
	I1206 09:53:55.907839  802078 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:53:55.907848  802078 kubeadm.go:319] 
	I1206 09:53:55.907970  802078 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token prjydb.psh7t9q7oigrozcv \
	I1206 09:53:55.908121  802078 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
	I1206 09:53:55.911115  802078 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:53:55.911281  802078 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
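
Note: the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info. A minimal Go sketch to recompute it from ca.crt (the default kubeadm PKI path is assumed):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded
	// Subject Public Key Info of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
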
	I1206 09:53:55.911328  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:55.912757  802078 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 06 09:53:41 embed-certs-997968 crio[564]: time="2025-12-06T09:53:41.36087178Z" level=info msg="Removing container: 48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96" id=63ce59a3-d0b2-4483-8573-a4b0389d7907 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:41 embed-certs-997968 crio[564]: time="2025-12-06T09:53:41.372733429Z" level=info msg="Removed container 48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b/dashboard-metrics-scraper" id=63ce59a3-d0b2-4483-8573-a4b0389d7907 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.364094727Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8465008f-1503-492f-8b13-0497ef70845f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.365071328Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ef0fe62-affe-46df-b96c-d08c9c2479a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.366142063Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4e1fda57-e9c0-4475-b7d3-eb5e20428679 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.366293596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.373882647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.374121513Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e721eb050fddd4c4d9e8d7e1b38c6efc61d9219ecb08fa5f133ee73a933bdc86/merged/etc/passwd: no such file or directory"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.374163699Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e721eb050fddd4c4d9e8d7e1b38c6efc61d9219ecb08fa5f133ee73a933bdc86/merged/etc/group: no such file or directory"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.374440581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.41024507Z" level=info msg="Created container b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b: kube-system/storage-provisioner/storage-provisioner" id=4e1fda57-e9c0-4475-b7d3-eb5e20428679 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.41093069Z" level=info msg="Starting container: b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b" id=b0351ef8-1f0c-4e4f-b96b-5b8071b51b42 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.412764824Z" level=info msg="Started container" PID=1725 containerID=b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b description=kube-system/storage-provisioner/storage-provisioner id=b0351ef8-1f0c-4e4f-b96b-5b8071b51b42 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ef42729e937b2ca3e9bba2aedeb2ae00aca5fa5ae7450af6403e2bf88786965
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.958788892Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.964576197Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.964612048Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.964643054Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.969937078Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.969965861Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.96998885Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.974072412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.974103292Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.974128665Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.97848855Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.978519004Z" level=info msg="Updated default CNI network name to kindnet"
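
Note: the CREATE .temp / WRITE / RENAME events above show the write-then-rename pattern kindnet uses so CRI-O never observes a half-written CNI config. A minimal sketch of that pattern (the conflist payload here is illustrative, not kindnet's actual output):

package main

import (
	"os"
	"path/filepath"
)

// writeConflist writes the config to a .temp file first, then renames it
// into place; rename is atomic on the same filesystem, so readers see
// either the old file or the complete new one.
func writeConflist(dir, name string, data []byte) error {
	tmp := filepath.Join(dir, name+".temp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, name))
}

func main() {
	conf := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[{"type":"ptp"}]}`)
	if err := writeConflist("/etc/cni/net.d", "10-kindnet.conflist", conf); err != nil {
		panic(err)
	}
}
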
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b15abaa4621ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   8ef42729e937b       storage-provisioner                          kube-system
	7ef988bd35261       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   8eb1d165be86e       dashboard-metrics-scraper-6ffb444bf9-ffd4b   kubernetes-dashboard
	65c20f2832484       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   013e22a0e95e9       kubernetes-dashboard-855c9754f9-tc684        kubernetes-dashboard
	80113249bfadd       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   5954fbeed6686       busybox                                      default
	7cab4e729fa4f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   4a32336c7ccfb       coredns-66bc5c9577-kw8nl                     kube-system
	7c3a2deb09c8c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           48 seconds ago      Running             kube-proxy                  0                   50869195acbbf       kube-proxy-m2zpr                             kube-system
	d5f15fc411f8e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   bfbcc644b9c76       kindnet-f84xr                                kube-system
	cf27f79cf6600       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   8ef42729e937b       storage-provisioner                          kube-system
	f0c346e2ecb86       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           51 seconds ago      Running             kube-controller-manager     0                   4aaae2e5af9d1       kube-controller-manager-embed-certs-997968   kube-system
	ccbdbea6e31f7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           51 seconds ago      Running             kube-apiserver              0                   7365f9ba1c6a7       kube-apiserver-embed-certs-997968            kube-system
	9567c8724e790       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           51 seconds ago      Running             kube-scheduler              0                   8893d48f24b35       kube-scheduler-embed-certs-997968            kube-system
	aea22bcd770b6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           51 seconds ago      Running             etcd                        0                   3e1285f7168b4       etcd-embed-certs-997968                      kube-system
	
	
	==> coredns [7cab4e729fa4fbf88d02cc827d35d3b458ec55221475e3d84901c71b0aaffabd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41778 - 33262 "HINFO IN 7832111765629505420.4352648602490583271. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057761001s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-997968
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-997968
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=embed-certs-997968
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-997968
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:53:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:53:42 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:53:42 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:53:42 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:53:42 +0000   Sat, 06 Dec 2025 09:52:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-997968
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                39095a07-7a66-4c4f-9c45-34915880419b
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-kw8nl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-997968                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-f84xr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-997968             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-997968    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-m2zpr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-997968             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ffd4b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tc684         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node embed-certs-997968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node embed-certs-997968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x8 over 114s)  kubelet          Node embed-certs-997968 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node embed-certs-997968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node embed-certs-997968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node embed-certs-997968 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node embed-certs-997968 event: Registered Node embed-certs-997968 in Controller
	  Normal  NodeReady                91s                  kubelet          Node embed-certs-997968 status is now: NodeReady
	  Normal  Starting                 52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)    kubelet          Node embed-certs-997968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)    kubelet          Node embed-certs-997968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)    kubelet          Node embed-certs-997968 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                  node-controller  Node embed-certs-997968 event: Registered Node embed-certs-997968 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [aea22bcd770b685f5b36f548f9387928f647a3eb4b9ecbbe8f9c4b71394765c0] <==
	{"level":"warn","ts":"2025-12-06T09:53:15.995225Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.310835ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-12-06T09:53:15.995261Z","caller":"traceutil/trace.go:172","msg":"trace[1845936111] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"309.108757ms","start":"2025-12-06T09:53:15.686141Z","end":"2025-12-06T09:53:15.995250Z","steps":["trace[1845936111] 'process raft request'  (duration: 309.060796ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:53:15.995272Z","caller":"traceutil/trace.go:172","msg":"trace[1074145201] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:542; }","duration":"187.369374ms","start":"2025-12-06T09:53:15.807895Z","end":"2025-12-06T09:53:15.995264Z","steps":["trace[1074145201] 'agreement among raft nodes before linearized reading'  (duration: 187.237956ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:53:15.995273Z","caller":"traceutil/trace.go:172","msg":"trace[1315797574] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"310.001245ms","start":"2025-12-06T09:53:15.685266Z","end":"2025-12-06T09:53:15.995268Z","steps":["trace[1315797574] 'process raft request'  (duration: 309.841367ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:53:15.995243Z","caller":"traceutil/trace.go:172","msg":"trace[1803967279] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9; range_end:; response_count:1; response_revision:542; }","duration":"310.445017ms","start":"2025-12-06T09:53:15.684787Z","end":"2025-12-06T09:53:15.995232Z","steps":["trace[1803967279] 'agreement among raft nodes before linearized reading'  (duration: 310.259884ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:53:15.995336Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.685250Z","time spent":"310.056067ms","remote":"127.0.0.1:39720","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":708,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-997968.187e9799313750c3\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/embed-certs-997968.187e9799313750c3\" value_size:630 lease:499225502517217530 >> failure:<>"}
	{"level":"warn","ts":"2025-12-06T09:53:15.995337Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.686131Z","time spent":"309.157654ms","remote":"127.0.0.1:40518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3123,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" mod_revision:537 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" value_size:3041 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" > >"}
	{"level":"warn","ts":"2025-12-06T09:53:15.995362Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.684773Z","time spent":"310.572651ms","remote":"127.0.0.1:40518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":1,"response size":2996,"request content":"key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:53:16.176313Z","caller":"traceutil/trace.go:172","msg":"trace[1426097075] linearizableReadLoop","detail":"{readStateIndex:575; appliedIndex:575; }","duration":"122.644918ms","start":"2025-12-06T09:53:16.053630Z","end":"2025-12-06T09:53:16.176275Z","steps":["trace[1426097075] 'read index received'  (duration: 122.634725ms)","trace[1426097075] 'applied index is now lower than readState.Index'  (duration: 8.72µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:53:16.304016Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"307.543592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684\" limit:1 ","response":"range_response_count:1 size:2850"}
	{"level":"info","ts":"2025-12-06T09:53:16.304108Z","caller":"traceutil/trace.go:172","msg":"trace[1519919302] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684; range_end:; response_count:1; response_revision:545; }","duration":"307.630223ms","start":"2025-12-06T09:53:15.996443Z","end":"2025-12-06T09:53:16.304073Z","steps":["trace[1519919302] 'agreement among raft nodes before linearized reading'  (duration: 179.951085ms)","trace[1519919302] 'range keys from in-memory index tree'  (duration: 127.497162ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:53:16.304031Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"277.810833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-kw8nl\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"warn","ts":"2025-12-06T09:53:16.304165Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.996427Z","time spent":"307.726572ms","remote":"127.0.0.1:39946","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":1,"response size":2873,"request content":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:53:16.304185Z","caller":"traceutil/trace.go:172","msg":"trace[1468186240] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-kw8nl; range_end:; response_count:1; response_revision:545; }","duration":"277.970629ms","start":"2025-12-06T09:53:16.026199Z","end":"2025-12-06T09:53:16.304170Z","steps":["trace[1468186240] 'agreement among raft nodes before linearized reading'  (duration: 150.117406ms)","trace[1468186240] 'range keys from in-memory index tree'  (duration: 127.60717ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:53:16.304236Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.700232ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597539371993396 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-997968.187e9799313765dc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/embed-certs-997968.187e9799313765dc\" value_size:628 lease:499225502517217530 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:53:16.304369Z","caller":"traceutil/trace.go:172","msg":"trace[1839414587] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"305.827801ms","start":"2025-12-06T09:53:15.998524Z","end":"2025-12-06T09:53:16.304352Z","steps":["trace[1839414587] 'process raft request'  (duration: 177.780029ms)","trace[1839414587] 'compare'  (duration: 127.596242ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:53:16.304425Z","caller":"traceutil/trace.go:172","msg":"trace[658265676] linearizableReadLoop","detail":"{readStateIndex:578; appliedIndex:575; }","duration":"128.05641ms","start":"2025-12-06T09:53:16.176359Z","end":"2025-12-06T09:53:16.304415Z","steps":["trace[658265676] 'read index received'  (duration: 47.98µs)","trace[658265676] 'applied index is now lower than readState.Index'  (duration: 128.007774ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:53:16.304488Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.998508Z","time spent":"305.888398ms","remote":"127.0.0.1:39720","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":706,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-997968.187e9799313765dc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/embed-certs-997968.187e9799313765dc\" value_size:628 lease:499225502517217530 >> failure:<>"}
	{"level":"info","ts":"2025-12-06T09:53:16.304536Z","caller":"traceutil/trace.go:172","msg":"trace[629104235] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"304.972804ms","start":"2025-12-06T09:53:15.999554Z","end":"2025-12-06T09:53:16.304527Z","steps":["trace[629104235] 'process raft request'  (duration: 304.731479ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:53:16.304556Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.050666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684\" limit:1 ","response":"range_response_count:1 size:2850"}
	{"level":"info","ts":"2025-12-06T09:53:16.304582Z","caller":"traceutil/trace.go:172","msg":"trace[1814395393] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684; range_end:; response_count:1; response_revision:548; }","duration":"248.080726ms","start":"2025-12-06T09:53:16.056494Z","end":"2025-12-06T09:53:16.304575Z","steps":["trace[1814395393] 'agreement among raft nodes before linearized reading'  (duration: 247.954686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:53:16.304601Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.999535Z","time spent":"305.030209ms","remote":"127.0.0.1:40518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3003,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" mod_revision:534 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" value_size:2916 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" > >"}
	{"level":"info","ts":"2025-12-06T09:53:16.304604Z","caller":"traceutil/trace.go:172","msg":"trace[900876469] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"304.458329ms","start":"2025-12-06T09:53:16.000136Z","end":"2025-12-06T09:53:16.304594Z","steps":["trace[900876469] 'process raft request'  (duration: 304.230407ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:53:16.304660Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:16.000124Z","time spent":"304.500922ms","remote":"127.0.0.1:40466","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4918,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:538 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:4847 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >"}
	{"level":"info","ts":"2025-12-06T09:53:16.724639Z","caller":"traceutil/trace.go:172","msg":"trace[507108098] transaction","detail":"{read_only:false; response_revision:561; number_of_response:1; }","duration":"101.072643ms","start":"2025-12-06T09:53:16.623549Z","end":"2025-12-06T09:53:16.724621Z","steps":["trace[507108098] 'process raft request'  (duration: 38.142461ms)","trace[507108098] 'compare'  (duration: 62.822175ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:54:00 up  2:36,  0 user,  load average: 4.10, 3.34, 3.37
	Linux embed-certs-997968 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5f15fc411f8e34d0fe7d52849aaf1d7a447d0b42b610ca92f5e65f54ca33b72] <==
	I1206 09:53:11.736011       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:53:11.736314       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:53:11.736516       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:53:11.736541       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:53:11.736566       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:53:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:53:11.955650       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:53:11.955696       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:53:11.955709       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:53:11.956027       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 09:53:41.955876       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1206 09:53:41.955888       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 09:53:41.956028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 09:53:41.957146       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1206 09:53:43.556317       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:53:43.556362       1 metrics.go:72] Registering metrics
	I1206 09:53:43.556434       1 controller.go:711] "Syncing nftables rules"
	I1206 09:53:51.958384       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:53:51.958501       1 main.go:301] handling current node
	
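	The four "Failed to watch" errors above are one symptom: for roughly 30s after the restart kindnetd cannot reach the apiserver service VIP 10.96.0.1:443, after which its caches sync at 09:53:43. A sketch of probing that path by hand, assuming kubectl access to this profile and curl inside the kicbase node image (neither command is part of the test run):
	
	  kubectl --context embed-certs-997968 get --raw /readyz
	  out/minikube-linux-amd64 -p embed-certs-997968 ssh -- curl -skm 5 https://10.96.0.1/version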
	
	==> kube-apiserver [ccbdbea6e31f77d77210cb56e75d243da8b87d3a1bba9fb48502f886fe7cc436] <==
	I1206 09:53:10.940314       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:53:10.940019       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:53:10.940969       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:53:10.940046       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:53:10.940066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:53:10.941656       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:53:10.940572       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:53:10.940918       1 aggregator.go:171] initial CRD sync complete...
	I1206 09:53:10.941976       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:53:10.941985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:53:10.941993       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:53:10.948380       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:53:10.978585       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:53:11.182197       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:53:11.241315       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:53:11.288752       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:53:11.319662       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:53:11.330207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:53:11.377074       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.134.52"}
	I1206 09:53:11.397498       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.17.36"}
	I1206 09:53:11.842860       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:53:14.491788       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:53:14.663106       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:53:14.875155       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
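	The two "allocated clusterIPs" lines can be cross-checked against the live Service objects; a sketch, assuming kubectl access to this profile:
	
	  kubectl --context embed-certs-997968 -n kubernetes-dashboard get svc kubernetes-dashboard -o jsonpath='{.spec.clusterIP}'
	  # should print 10.101.134.52, matching the allocation logged above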
	
	==> kube-controller-manager [f0c346e2ecb8689cc659d92dd982e72bea92df80d9c19d6fe9b36590adae4c5d] <==
	I1206 09:53:14.260126       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:53:14.273400       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:53:14.276757       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:53:14.278501       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:53:14.284091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:53:14.284091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:53:14.284095       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:53:14.284096       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:53:14.285496       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:53:14.294899       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:53:14.294939       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:53:14.295005       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:53:14.295155       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:53:14.296276       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:53:14.296375       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:53:14.296388       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:53:14.296413       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 09:53:14.298664       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:53:14.300928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:53:14.314086       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:53:14.314228       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:53:14.322340       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:53:14.324568       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:53:14.326813       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:53:14.331050       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [7c3a2deb09c8c337db0be2cf134ecc3f8dc26a79db21ff5911915b272f23ebec] <==
	I1206 09:53:11.598804       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:53:11.662654       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:53:11.763629       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:53:11.763699       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 09:53:11.763789       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:53:11.782450       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:53:11.782548       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:53:11.788206       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:53:11.788774       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:53:11.788813       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:11.791452       1 config.go:200] "Starting service config controller"
	I1206 09:53:11.791502       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:53:11.791655       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:53:11.791689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:53:11.791756       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:53:11.791764       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:53:11.791782       1 config.go:309] "Starting node config controller"
	I1206 09:53:11.791787       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:53:11.791793       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:53:11.892512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:53:11.892512       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:53:11.892543       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
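	The nodePortAddresses warning above quotes its own remedy (`--nodeport-addresses primary`). In a kubeadm-provisioned cluster like this one, kube-proxy reads KubeProxyConfiguration from the kube-proxy ConfigMap in kube-system, so one hedged sketch of applying the suggested setting (not something the test does; the config.conf key and the "primary" value are assumed from upstream kubeadm/kube-proxy defaults) is:
	
	  kubectl --context embed-certs-997968 -n kube-system edit configmap kube-proxy
	  # set nodePortAddresses: ["primary"] inside config.conf, then restart the kube-proxy pod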
	
	==> kube-scheduler [9567c8724e7902114f90b0bfd9aeaba8475dd4c7fdffc2b71b9794b8d2429d02] <==
	I1206 09:53:09.643160       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:53:10.902061       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:53:10.902100       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:10.907730       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 09:53:10.907770       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1206 09:53:10.907838       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:53:10.907909       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:53:10.907888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:53:10.908016       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:53:10.908200       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:53:10.908300       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:53:11.008689       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1206 09:53:11.008736       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:53:11.008752       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:53:11 embed-certs-997968 kubelet[730]: E1206 09:53:11.266880     730 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-997968\" already exists" pod="kube-system/etcd-embed-certs-997968"
	Dec 06 09:53:15 embed-certs-997968 kubelet[730]: I1206 09:53:15.704973     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f914f59a-882f-4ac6-babd-0ef19a2aed75-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ffd4b\" (UID: \"f914f59a-882f-4ac6-babd-0ef19a2aed75\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b"
	Dec 06 09:53:15 embed-certs-997968 kubelet[730]: I1206 09:53:15.705032     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztf7n\" (UniqueName: \"kubernetes.io/projected/f914f59a-882f-4ac6-babd-0ef19a2aed75-kube-api-access-ztf7n\") pod \"dashboard-metrics-scraper-6ffb444bf9-ffd4b\" (UID: \"f914f59a-882f-4ac6-babd-0ef19a2aed75\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b"
	Dec 06 09:53:15 embed-certs-997968 kubelet[730]: I1206 09:53:15.805486     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v962\" (UniqueName: \"kubernetes.io/projected/48554eb1-e975-4229-8ee7-2e6aeb6ed273-kube-api-access-2v962\") pod \"kubernetes-dashboard-855c9754f9-tc684\" (UID: \"48554eb1-e975-4229-8ee7-2e6aeb6ed273\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684"
	Dec 06 09:53:15 embed-certs-997968 kubelet[730]: I1206 09:53:15.805595     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/48554eb1-e975-4229-8ee7-2e6aeb6ed273-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-tc684\" (UID: \"48554eb1-e975-4229-8ee7-2e6aeb6ed273\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684"
	Dec 06 09:53:19 embed-certs-997968 kubelet[730]: I1206 09:53:19.289964     730 scope.go:117] "RemoveContainer" containerID="84493d6c463fefaf379ace03cd2f4e02cfbf27abe23fb1b594485606e268f6fb"
	Dec 06 09:53:20 embed-certs-997968 kubelet[730]: I1206 09:53:20.294714     730 scope.go:117] "RemoveContainer" containerID="84493d6c463fefaf379ace03cd2f4e02cfbf27abe23fb1b594485606e268f6fb"
	Dec 06 09:53:20 embed-certs-997968 kubelet[730]: I1206 09:53:20.294854     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:20 embed-certs-997968 kubelet[730]: E1206 09:53:20.295054     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:21 embed-certs-997968 kubelet[730]: I1206 09:53:21.299368     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:21 embed-certs-997968 kubelet[730]: E1206 09:53:21.299611     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:23 embed-certs-997968 kubelet[730]: I1206 09:53:23.316695     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684" podStartSLOduration=1.782078582 podStartE2EDuration="8.316669259s" podCreationTimestamp="2025-12-06 09:53:15 +0000 UTC" firstStartedPulling="2025-12-06 09:53:16.545446722 +0000 UTC m=+8.473091879" lastFinishedPulling="2025-12-06 09:53:23.080037395 +0000 UTC m=+15.007682556" observedRunningTime="2025-12-06 09:53:23.316474957 +0000 UTC m=+15.244120137" watchObservedRunningTime="2025-12-06 09:53:23.316669259 +0000 UTC m=+15.244314438"
	Dec 06 09:53:26 embed-certs-997968 kubelet[730]: I1206 09:53:26.571220     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:26 embed-certs-997968 kubelet[730]: E1206 09:53:26.571428     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:41 embed-certs-997968 kubelet[730]: I1206 09:53:41.207550     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:41 embed-certs-997968 kubelet[730]: I1206 09:53:41.359413     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:41 embed-certs-997968 kubelet[730]: I1206 09:53:41.359678     730 scope.go:117] "RemoveContainer" containerID="7ef988bd352613c28719b53227c1f510e726f382778e72ae58558de1a8ee8a55"
	Dec 06 09:53:41 embed-certs-997968 kubelet[730]: E1206 09:53:41.359906     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:42 embed-certs-997968 kubelet[730]: I1206 09:53:42.363658     730 scope.go:117] "RemoveContainer" containerID="cf27f79cf660003825ed87864bf3215b6c1821e837a85725d61f857172afc541"
	Dec 06 09:53:46 embed-certs-997968 kubelet[730]: I1206 09:53:46.571546     730 scope.go:117] "RemoveContainer" containerID="7ef988bd352613c28719b53227c1f510e726f382778e72ae58558de1a8ee8a55"
	Dec 06 09:53:46 embed-certs-997968 kubelet[730]: E1206 09:53:46.571762     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:57 embed-certs-997968 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:53:57 embed-certs-997968 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:53:57 embed-certs-997968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:53:57 embed-certs-997968 systemd[1]: kubelet.service: Consumed 1.681s CPU time.
	
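	The kubelet section shows dashboard-metrics-scraper cycling through CrashLoopBackOff with a growing back-off (10s, then 20s). The usual next step, beyond what the harness captures here, would be the previous container's own logs; a sketch using the pod name taken from the messages above:
	
	  kubectl --context embed-certs-997968 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-ffd4b --previous
	  kubectl --context embed-certs-997968 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-ffd4b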
	
	==> kubernetes-dashboard [65c20f28324841a573607a02aa9b5804867835a7e2ec696ee719ec51845d6c3f] <==
	2025/12/06 09:53:23 Starting overwatch
	2025/12/06 09:53:23 Using namespace: kubernetes-dashboard
	2025/12/06 09:53:23 Using in-cluster config to connect to apiserver
	2025/12/06 09:53:23 Using secret token for csrf signing
	2025/12/06 09:53:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:53:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:53:23 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 09:53:23 Generating JWE encryption key
	2025/12/06 09:53:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:53:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:53:23 Initializing JWE encryption key from synchronized object
	2025/12/06 09:53:23 Creating in-cluster Sidecar client
	2025/12/06 09:53:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:53:23 Serving insecurely on HTTP port: 9090
	2025/12/06 09:53:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
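	The dashboard's "Metric client health check failed" retries line up with the scraper pod crash-looping in the kubelet section; the Service itself exists (the apiserver allocated it 10.97.17.36 earlier), which could be confirmed with:
	
	  kubectl --context embed-certs-997968 -n kubernetes-dashboard get svc dashboard-metrics-scraper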
	
	==> storage-provisioner [b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b] <==
	I1206 09:53:42.425779       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:53:42.433354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:53:42.433388       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:53:42.435520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:45.890598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:50.150862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:53.748519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:56.810769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:59.834089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:59.840219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:59.840423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:53:59.840657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-997968_daee1b08-57e6-4e94-8c39-947a0612956c!
	I1206 09:53:59.840719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a3232d1c-1b95-4b7b-ae4c-725079989772", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-997968_daee1b08-57e6-4e94-8c39-947a0612956c became leader
	W1206 09:53:59.843801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:59.848120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:59.941572       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-997968_daee1b08-57e6-4e94-8c39-947a0612956c!
	
	
	==> storage-provisioner [cf27f79cf660003825ed87864bf3215b6c1821e837a85725d61f857172afc541] <==
	I1206 09:53:11.563742       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:53:41.566944       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
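	Both storage-provisioner instances contend for the same Endpoints-based lease: the first dies at 09:53:41 on the apiserver timeout, and the second acquires the lease at 09:53:59. The object named in the leader-election event can be inspected directly (a sketch, not part of the test run):
	
	  kubectl --context embed-certs-997968 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml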

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-997968 -n embed-certs-997968
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-997968 -n embed-certs-997968: exit status 2 (476.769514ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-997968 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-997968
helpers_test.go:243: (dbg) docker inspect embed-certs-997968:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062",
	        "Created": "2025-12-06T09:51:52.675095642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 792788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:53:01.737101155Z",
	            "FinishedAt": "2025-12-06T09:53:00.68511387Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/hosts",
	        "LogPath": "/var/lib/docker/containers/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062/0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062-json.log",
	        "Name": "/embed-certs-997968",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-997968:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-997968",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e3f6d38a91635ea3d4e8b4e8414647f9bfa446249fd659825daedae64836062",
	                "LowerDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd-init/diff:/var/lib/docker/overlay2/b1d051343d3724882eb0db225f208bd98a623617ce3d858d48f5782873b2b61c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/895134fe8a675c5f118e21edbfec4adb761d1a31db2f1aa1177b2b163d4b4bdd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-997968",
	                "Source": "/var/lib/docker/volumes/embed-certs-997968/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-997968",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-997968",
	                "name.minikube.sigs.k8s.io": "embed-certs-997968",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bae149f05228d50f6c0260de71549fc3f67b5b8013514395d5d5c5600e764ea3",
	            "SandboxKey": "/var/run/docker/netns/bae149f05228",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33226"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33230"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33228"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33229"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-997968": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d9447c39c3ca701200d25c23e931e64eec9678dd597d8d4ca10d4b524dddd69",
	                    "EndpointID": "698bcdd3ab588f010bdf59409e90c564460380aa1b2d102265a23234966e41bc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "fa:0c:80:e7:9d:c0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-997968",
	                        "0e3f6d38a916"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
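The NetworkSettings.Ports map in the inspect output above is where the container's mapped host ports live; the same lookup can be done directly with a Go template (a hypothetical invocation, with the expected output read off the JSON above):

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-997968
  # prints 33226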
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997968 -n embed-certs-997968
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997968 -n embed-certs-997968: exit status 2 (365.509994ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
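The two status probes above use minikube's Go-template output ({{.APIServer}} and {{.Host}} are fields of the status struct); a sketch combining them into one call ({{.Kubelet}} is assumed from minikube's default status output, not shown in this run):

  out/minikube-linux-amd64 status -p embed-certs-997968 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'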
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-997968 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-997968 logs -n 25: (1.246158621s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p newest-cni-641599 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ stop    │ -p default-k8s-diff-port-759696 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-997968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-997968 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ image   │ newest-cni-641599 image list --format=json                                                                                                                                                                                                           │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p newest-cni-641599 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ start   │ -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p newest-cni-641599                                                                                                                                                                                                                                 │ newest-cni-641599            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p auto-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-983381                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ no-preload-521770 image list --format=json                                                                                                                                                                                                           │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p no-preload-521770 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ delete  │ -p no-preload-521770                                                                                                                                                                                                                                 │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p no-preload-521770                                                                                                                                                                                                                                 │ no-preload-521770            │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p kindnet-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-983381               │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ default-k8s-diff-port-759696 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p default-k8s-diff-port-759696 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-759696 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	│ image   │ embed-certs-997968 image list --format=json                                                                                                                                                                                                          │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-997968 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-997968           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
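	The last row of the audit table is the command under test; it has no END TIME, matching the Pause failure this post-mortem covers. Re-running it by hand would look like (taken verbatim from that row):
	
	  out/minikube-linux-amd64 pause -p embed-certs-997968 --alsologtostderr -v=1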
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:53:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:53:35.736114  802078 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:53:35.736358  802078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:35.736366  802078 out.go:374] Setting ErrFile to fd 2...
	I1206 09:53:35.736370  802078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:53:35.736608  802078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:53:35.737088  802078 out.go:368] Setting JSON to false
	I1206 09:53:35.738323  802078 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9360,"bootTime":1765005456,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:53:35.738388  802078 start.go:143] virtualization: kvm guest
	I1206 09:53:35.740317  802078 out.go:179] * [kindnet-983381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:53:35.741422  802078 notify.go:221] Checking for updates...
	I1206 09:53:35.741506  802078 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:53:35.742495  802078 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:53:35.743616  802078 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:35.744630  802078 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:53:35.745749  802078 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:53:35.746924  802078 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:53:35.748304  802078 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748393  802078 config.go:182] Loaded profile config "default-k8s-diff-port-759696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748491  802078 config.go:182] Loaded profile config "embed-certs-997968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:35.748589  802078 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:53:35.772982  802078 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:53:35.773088  802078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:35.830680  802078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:35.820532325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:35.830809  802078 docker.go:319] overlay module found
	I1206 09:53:35.832481  802078 out.go:179] * Using the docker driver based on user configuration
	I1206 09:53:35.833543  802078 start.go:309] selected driver: docker
	I1206 09:53:35.833558  802078 start.go:927] validating driver "docker" against <nil>
	I1206 09:53:35.833571  802078 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:53:35.834109  802078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:53:35.894209  802078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:53:35.883098075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:53:35.894359  802078 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:53:35.894710  802078 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:53:35.896340  802078 out.go:179] * Using Docker driver with root privileges
	I1206 09:53:35.897286  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:35.897302  802078 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:53:35.897380  802078 start.go:353] cluster config:
	{Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:35.898495  802078 out.go:179] * Starting "kindnet-983381" primary control-plane node in "kindnet-983381" cluster
	I1206 09:53:35.899494  802078 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:53:35.900543  802078 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:53:35.901765  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:35.901802  802078 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:53:35.901815  802078 cache.go:65] Caching tarball of preloaded images
	I1206 09:53:35.901854  802078 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:53:35.901908  802078 preload.go:238] Found /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:53:35.901922  802078 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:53:35.902031  802078 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json ...
	I1206 09:53:35.902059  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json: {Name:mk3a79de74bde68ec31b151eacb622c73b38daf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:35.924146  802078 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:53:35.924170  802078 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:53:35.924185  802078 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:53:35.924214  802078 start.go:360] acquireMachinesLock for kindnet-983381: {Name:mk6e4785105686f4f72d41f8081d2646bcdec596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:53:35.924309  802078 start.go:364] duration metric: took 76.057µs to acquireMachinesLock for "kindnet-983381"
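
The acquireMachinesLock lines above show machine creation serialized behind a named lock with a 500ms retry delay and a 10-minute timeout ({Delay:500ms Timeout:10m0s}). A minimal Go sketch of that acquire-with-retry pattern, assuming a simple lock-file scheme (the path and helper here are illustrative, not minikube's actual implementation):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file until the timeout,
    // mirroring the Delay/Timeout parameters visible in the log.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_CREATE|O_EXCL fails if another holder already created the file.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to provision")
    }
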
	I1206 09:53:35.924331  802078 start.go:93] Provisioning new machine with config: &{Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:35.924423  802078 start.go:125] createHost starting for "" (driver="docker")
	W1206 09:53:33.030174  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:35.530789  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:35.464753  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:35.964668  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:36.464301  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:36.964782  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:37.464682  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:37.965001  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:38.464290  796626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:38.545704  796626 kubeadm.go:1114] duration metric: took 4.655385618s to wait for elevateKubeSystemPrivileges
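
The repeated `kubectl get sa default` runs above are a readiness poll: after kubeadm finishes, the test retries every 500ms until the "default" ServiceAccount exists, which is the condition elevateKubeSystemPrivileges waits on. A minimal client-go sketch of the same wait, assuming an already-constructed clientset (the function name is illustrative):

    package poll

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA blocks until the "default" ServiceAccount is visible,
    // re-checking every 500ms, like the kubectl loop in the log.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }
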
	I1206 09:53:38.545746  796626 kubeadm.go:403] duration metric: took 16.212898927s to StartCluster
	I1206 09:53:38.545772  796626 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:38.545859  796626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:53:38.548341  796626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:38.548970  796626 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:53:38.548998  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:53:38.549213  796626 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:38.549132  796626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:53:38.549388  796626 addons.go:70] Setting storage-provisioner=true in profile "auto-983381"
	I1206 09:53:38.549417  796626 addons.go:239] Setting addon storage-provisioner=true in "auto-983381"
	I1206 09:53:38.549483  796626 host.go:66] Checking if "auto-983381" exists ...
	I1206 09:53:38.550037  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.549392  796626 addons.go:70] Setting default-storageclass=true in profile "auto-983381"
	I1206 09:53:38.550507  796626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-983381"
	I1206 09:53:38.550833  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.550926  796626 out.go:179] * Verifying Kubernetes components...
	I1206 09:53:38.553410  796626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:38.588099  796626 addons.go:239] Setting addon default-storageclass=true in "auto-983381"
	I1206 09:53:38.588150  796626 host.go:66] Checking if "auto-983381" exists ...
	I1206 09:53:38.588574  796626 cli_runner.go:164] Run: docker container inspect auto-983381 --format={{.State.Status}}
	I1206 09:53:38.617638  796626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:38.617665  796626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:53:38.617866  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:38.626643  796626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1206 09:53:35.198662  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:37.200822  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:38.638810  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:38.668107  796626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:38.668132  796626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:53:38.668195  796626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-983381
	I1206 09:53:38.675658  796626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:53:38.692832  796626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33231 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/auto-983381/id_rsa Username:docker}
	I1206 09:53:38.702368  796626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:38.748795  796626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:53:38.801277  796626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:53:39.027533  796626 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
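
The sed pipeline at 09:53:38.675658 patches the coredns ConfigMap in place: it inserts a log directive ahead of the errors plugin and a hosts stanza ahead of the forward plugin, so host.minikube.internal resolves to the gateway IP inside the cluster. Reconstructed from those sed expressions, the two patched regions of the Corefile read:

        log
        errors

and, just before the forward plugin:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
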
	I1206 09:53:39.029182  796626 node_ready.go:35] waiting up to 15m0s for node "auto-983381" to be "Ready" ...
	I1206 09:53:39.654289  796626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-983381" context rescaled to 1 replicas
	I1206 09:53:40.345178  796626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.543850325s)
	I1206 09:53:40.346941  796626 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1206 09:53:35.925937  802078 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:53:35.926209  802078 start.go:159] libmachine.API.Create for "kindnet-983381" (driver="docker")
	I1206 09:53:35.926252  802078 client.go:173] LocalClient.Create starting
	I1206 09:53:35.926340  802078 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem
	I1206 09:53:35.926378  802078 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:35.926405  802078 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:35.926506  802078 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem
	I1206 09:53:35.926535  802078 main.go:143] libmachine: Decoding PEM data...
	I1206 09:53:35.926552  802078 main.go:143] libmachine: Parsing certificate...
	I1206 09:53:35.926986  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:53:35.943608  802078 cli_runner.go:211] docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:53:35.943723  802078 network_create.go:284] running [docker network inspect kindnet-983381] to gather additional debugging logs...
	I1206 09:53:35.943748  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381
	W1206 09:53:35.960448  802078 cli_runner.go:211] docker network inspect kindnet-983381 returned with exit code 1
	I1206 09:53:35.960495  802078 network_create.go:287] error running [docker network inspect kindnet-983381]: docker network inspect kindnet-983381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-983381 not found
	I1206 09:53:35.960514  802078 network_create.go:289] output of [docker network inspect kindnet-983381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-983381 not found
	
	** /stderr **
	I1206 09:53:35.960636  802078 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:35.980401  802078 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
	I1206 09:53:35.981149  802078 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d017f67e7a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:3d:88:f2:36:d5} reservation:<nil>}
	I1206 09:53:35.981925  802078 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-896d7bd66742 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:f2:60:db:24:87} reservation:<nil>}
	I1206 09:53:35.982560  802078 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fadb45f2248d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:97:af:e5:cc:0b} reservation:<nil>}
	I1206 09:53:35.983088  802078 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5d9447c39c3c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e2:61:5e:c6:7b:21} reservation:<nil>}
	I1206 09:53:35.983881  802078 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e6b5e0}
	I1206 09:53:35.983907  802078 network_create.go:124] attempt to create docker network kindnet-983381 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:53:35.983952  802078 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-983381 kindnet-983381
	I1206 09:53:36.037591  802078 network_create.go:108] docker network kindnet-983381 192.168.94.0/24 created
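
The network.go lines above show the subnet picker walking candidate /24s (192.168.49.0, 58, 67, 76, 85, ...) with the third octet stepped by 9, skipping any subnet that already backs a bridge, and taking the first free one (192.168.94.0/24 here). A minimal Go sketch of that scan, with the taken set stubbed from the log (minikube's real check also inspects host interfaces and reservations):

    package main

    import "fmt"

    // firstFreeSubnet mimics the walk in the log: start at 192.168.49.0/24
    // and step the third octet by 9 until a candidate is not taken.
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return "" // exhausted the range
    }

    func main() {
        taken := map[string]bool{ // bridges already present per the log
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.94.0/24
    }
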
	I1206 09:53:36.037621  802078 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-983381" container
	I1206 09:53:36.037678  802078 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:53:36.055484  802078 cli_runner.go:164] Run: docker volume create kindnet-983381 --label name.minikube.sigs.k8s.io=kindnet-983381 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:53:36.074528  802078 oci.go:103] Successfully created a docker volume kindnet-983381
	I1206 09:53:36.074605  802078 cli_runner.go:164] Run: docker run --rm --name kindnet-983381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-983381 --entrypoint /usr/bin/test -v kindnet-983381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:53:36.495904  802078 oci.go:107] Successfully prepared a docker volume kindnet-983381
	I1206 09:53:36.495988  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:36.496004  802078 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:53:36.496085  802078 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:53:40.493659  802078 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-983381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.997500219s)
	I1206 09:53:40.493705  802078 kic.go:203] duration metric: took 3.997696888s to extract preloaded images to volume ...
	W1206 09:53:40.493857  802078 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:53:40.493908  802078 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:53:40.493960  802078 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:53:40.553379  802078 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-983381 --name kindnet-983381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-983381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-983381 --network kindnet-983381 --ip 192.168.94.2 --volume kindnet-983381:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	W1206 09:53:37.530880  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	W1206 09:53:39.530936  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:40.844704  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Running}}
	I1206 09:53:40.865257  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:40.884729  802078 cli_runner.go:164] Run: docker exec kindnet-983381 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:53:40.933934  802078 oci.go:144] the created container "kindnet-983381" has a running status.
	I1206 09:53:40.933992  802078 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa...
	I1206 09:53:41.065963  802078 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:53:41.097932  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:41.118699  802078 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:53:41.118719  802078 kic_runner.go:114] Args: [docker exec --privileged kindnet-983381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:53:41.177800  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:53:41.202566  802078 machine.go:94] provisionDockerMachine start ...
	I1206 09:53:41.202682  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.226294  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.226976  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.227014  802078 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:53:41.366826  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-983381
	
	I1206 09:53:41.366854  802078 ubuntu.go:182] provisioning hostname "kindnet-983381"
	I1206 09:53:41.366930  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.388560  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.388853  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.388868  802078 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-983381 && echo "kindnet-983381" | sudo tee /etc/hostname
	I1206 09:53:41.533194  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-983381
	
	I1206 09:53:41.533282  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.553319  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:41.553612  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:41.553649  802078 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-983381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-983381/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-983381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:53:41.687391  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:53:41.687422  802078 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-499330/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-499330/.minikube}
	I1206 09:53:41.687496  802078 ubuntu.go:190] setting up certificates
	I1206 09:53:41.687511  802078 provision.go:84] configureAuth start
	I1206 09:53:41.687570  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:41.706986  802078 provision.go:143] copyHostCerts
	I1206 09:53:41.707057  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem, removing ...
	I1206 09:53:41.707070  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem
	I1206 09:53:41.707141  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/key.pem (1675 bytes)
	I1206 09:53:41.707232  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem, removing ...
	I1206 09:53:41.707242  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem
	I1206 09:53:41.707269  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/ca.pem (1082 bytes)
	I1206 09:53:41.707336  802078 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem, removing ...
	I1206 09:53:41.707343  802078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem
	I1206 09:53:41.707366  802078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-499330/.minikube/cert.pem (1123 bytes)
	I1206 09:53:41.707413  802078 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem org=jenkins.kindnet-983381 san=[127.0.0.1 192.168.94.2 kindnet-983381 localhost minikube]
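
The server cert above is issued for the SANs [127.0.0.1 192.168.94.2 kindnet-983381 localhost minikube] and signed by the CA under .minikube/certs. A minimal crypto/x509 sketch producing a certificate with the same SAN split between IPs and DNS names (self-signed here for brevity; the real flow signs with ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-983381"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // DNS names and IPs matching the san=[...] list in the log:
            DNSNames:    []string{"kindnet-983381", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
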
	I1206 09:53:41.806395  802078 provision.go:177] copyRemoteCerts
	I1206 09:53:41.806477  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:53:41.806526  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:41.825939  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:41.922925  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1206 09:53:41.943043  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:53:41.962026  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:53:41.980803  802078 provision.go:87] duration metric: took 293.274301ms to configureAuth
	I1206 09:53:41.980839  802078 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:53:41.981030  802078 config.go:182] Loaded profile config "kindnet-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:53:41.981180  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.001023  802078 main.go:143] libmachine: Using SSH client type: native
	I1206 09:53:42.001294  802078 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33236 <nil> <nil>}
	I1206 09:53:42.001312  802078 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:53:42.284104  802078 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:53:42.284126  802078 machine.go:97] duration metric: took 1.081535088s to provisionDockerMachine
	I1206 09:53:42.284136  802078 client.go:176] duration metric: took 6.35787804s to LocalClient.Create
	I1206 09:53:42.284158  802078 start.go:167] duration metric: took 6.357949811s to libmachine.API.Create "kindnet-983381"
	I1206 09:53:42.284171  802078 start.go:293] postStartSetup for "kindnet-983381" (driver="docker")
	I1206 09:53:42.284188  802078 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:53:42.284255  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:53:42.284310  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.302172  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.400807  802078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:53:42.404744  802078 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:53:42.404778  802078 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:53:42.404792  802078 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/addons for local assets ...
	I1206 09:53:42.404846  802078 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-499330/.minikube/files for local assets ...
	I1206 09:53:42.404962  802078 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem -> 5028672.pem in /etc/ssl/certs
	I1206 09:53:42.405098  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:53:42.414846  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:42.437157  802078 start.go:296] duration metric: took 152.966336ms for postStartSetup
	I1206 09:53:42.437535  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:42.455950  802078 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/config.json ...
	I1206 09:53:42.456172  802078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:53:42.456212  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.474118  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.568913  802078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:53:42.573671  802078 start.go:128] duration metric: took 6.649231212s to createHost
	I1206 09:53:42.573696  802078 start.go:83] releasing machines lock for "kindnet-983381", held for 6.649375377s
	I1206 09:53:42.573776  802078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-983381
	I1206 09:53:42.593508  802078 ssh_runner.go:195] Run: cat /version.json
	I1206 09:53:42.593569  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.593516  802078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:53:42.593700  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:53:42.611419  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.612544  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:53:42.760558  802078 ssh_runner.go:195] Run: systemctl --version
	I1206 09:53:42.767177  802078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:53:42.803261  802078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:53:42.807859  802078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:53:42.807927  802078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:53:42.833482  802078 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:53:42.833508  802078 start.go:496] detecting cgroup driver to use...
	I1206 09:53:42.833546  802078 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:53:42.833599  802078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:53:42.849782  802078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:53:42.862043  802078 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:53:42.862089  802078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:53:42.879135  802078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:53:42.898925  802078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:53:42.987951  802078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:53:43.078627  802078 docker.go:234] disabling docker service ...
	I1206 09:53:43.078699  802078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:53:43.100368  802078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:53:43.113370  802078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:53:43.201176  802078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:53:43.294081  802078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:53:43.307631  802078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:53:43.321801  802078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:53:43.321856  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.331480  802078 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:53:43.331547  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.340367  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.349027  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.357421  802078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:53:43.365342  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.375704  802078 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.390512  802078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:53:43.399512  802078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:53:43.406601  802078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:53:43.413755  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:43.497188  802078 ssh_runner.go:195] Run: sudo systemctl restart crio
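
Taken together, the sed edits between 09:53:43.32 and 09:53:43.39 leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with the following keys before the restart (reconstructed from the expressions above; section headers follow CRI-O's standard layout and other keys are omitted):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
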
	I1206 09:53:43.639801  802078 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:53:43.639882  802078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:53:43.644037  802078 start.go:564] Will wait 60s for crictl version
	I1206 09:53:43.644085  802078 ssh_runner.go:195] Run: which crictl
	I1206 09:53:43.647878  802078 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:53:43.673703  802078 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:53:43.673775  802078 ssh_runner.go:195] Run: crio --version
	I1206 09:53:43.703877  802078 ssh_runner.go:195] Run: crio --version
	I1206 09:53:43.733325  802078 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1206 09:53:39.697901  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	W1206 09:53:41.698088  789560 pod_ready.go:104] pod "coredns-66bc5c9577-gpnjq" is not "Ready", error: <nil>
	I1206 09:53:42.698368  789560 pod_ready.go:94] pod "coredns-66bc5c9577-gpnjq" is "Ready"
	I1206 09:53:42.698400  789560 pod_ready.go:86] duration metric: took 37.505994586s for pod "coredns-66bc5c9577-gpnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.700901  789560 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.705139  789560 pod_ready.go:94] pod "etcd-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.705165  789560 pod_ready.go:86] duration metric: took 4.236162ms for pod "etcd-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.707252  789560 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.711008  789560 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.711028  789560 pod_ready.go:86] duration metric: took 3.752374ms for pod "kube-apiserver-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.713026  789560 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:42.897101  789560 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:42.897138  789560 pod_ready.go:86] duration metric: took 184.092641ms for pod "kube-controller-manager-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.097595  789560 pod_ready.go:83] waiting for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.497251  789560 pod_ready.go:94] pod "kube-proxy-jstq5" is "Ready"
	I1206 09:53:43.497282  789560 pod_ready.go:86] duration metric: took 399.656581ms for pod "kube-proxy-jstq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.697290  789560 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.096580  789560 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-759696" is "Ready"
	I1206 09:53:44.096611  789560 pod_ready.go:86] duration metric: took 399.289382ms for pod "kube-scheduler-default-k8s-diff-port-759696" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.096627  789560 pod_ready.go:40] duration metric: took 38.907883012s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:44.141173  789560 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:53:44.143056  789560 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-759696" cluster and "default" namespace by default
	W1206 09:53:42.029753  792441 pod_ready.go:104] pod "coredns-66bc5c9577-kw8nl" is not "Ready", error: <nil>
	I1206 09:53:43.530143  792441 pod_ready.go:94] pod "coredns-66bc5c9577-kw8nl" is "Ready"
	I1206 09:53:43.530177  792441 pod_ready.go:86] duration metric: took 31.006235572s for pod "coredns-66bc5c9577-kw8nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.532504  792441 pod_ready.go:83] waiting for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.539381  792441 pod_ready.go:94] pod "etcd-embed-certs-997968" is "Ready"
	I1206 09:53:43.539408  792441 pod_ready.go:86] duration metric: took 6.868509ms for pod "etcd-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.541690  792441 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.545546  792441 pod_ready.go:94] pod "kube-apiserver-embed-certs-997968" is "Ready"
	I1206 09:53:43.545571  792441 pod_ready.go:86] duration metric: took 3.85484ms for pod "kube-apiserver-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.547358  792441 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.728143  792441 pod_ready.go:94] pod "kube-controller-manager-embed-certs-997968" is "Ready"
	I1206 09:53:43.728172  792441 pod_ready.go:86] duration metric: took 180.793456ms for pod "kube-controller-manager-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:43.928272  792441 pod_ready.go:83] waiting for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.328082  792441 pod_ready.go:94] pod "kube-proxy-m2zpr" is "Ready"
	I1206 09:53:44.328117  792441 pod_ready.go:86] duration metric: took 399.817969ms for pod "kube-proxy-m2zpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.528776  792441 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.927733  792441 pod_ready.go:94] pod "kube-scheduler-embed-certs-997968" is "Ready"
	I1206 09:53:44.927763  792441 pod_ready.go:86] duration metric: took 398.958608ms for pod "kube-scheduler-embed-certs-997968" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:53:44.927778  792441 pod_ready.go:40] duration metric: took 32.40863001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:53:44.980591  792441 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:53:44.982680  792441 out.go:179] * Done! kubectl is now configured to use "embed-certs-997968" cluster and "default" namespace by default
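
Both profiles finish the same way: every kube-system pod matching one of the component labels ([k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler]) must report Ready before "Done!" is printed. A minimal client-go sketch of that per-selector check, assuming an existing clientset (the helper name is illustrative):

    package ready

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podsReady reports whether every kube-system pod matching the label
    // selector (e.g. "k8s-app=kube-dns") has condition PodReady=True.
    func podsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }
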
	I1206 09:53:43.734370  802078 cli_runner.go:164] Run: docker network inspect kindnet-983381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:53:43.751497  802078 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:53:43.755659  802078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:53:43.765989  802078 kubeadm.go:884] updating cluster {Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:53:43.766104  802078 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:53:43.766146  802078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:43.799525  802078 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:43.799546  802078 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:53:43.799590  802078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:53:43.825735  802078 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:53:43.825758  802078 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:53:43.825766  802078 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1206 09:53:43.825861  802078 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-983381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1206 09:53:43.825926  802078 ssh_runner.go:195] Run: crio config
	I1206 09:53:43.872261  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:43.872292  802078 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:53:43.872313  802078 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-983381 NodeName:kindnet-983381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:53:43.872443  802078 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-983381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:53:43.872538  802078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:53:43.881153  802078 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:53:43.881224  802078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:53:43.889391  802078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1206 09:53:43.903305  802078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:53:43.918720  802078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1206 09:53:43.931394  802078 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:53:43.935089  802078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
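
The hosts-file updates at 09:53:43.755659 and 09:53:43.935089 use the same idempotent pattern: strip any existing line for the name, append the fresh mapping, write to a temp file, then copy it over /etc/hosts. A minimal Go sketch of that update, assuming write access to the target file (the helper name is illustrative):

    package hosts

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHost rewrites /etc/hosts-style content so exactly one line maps
    // ip -> name, mirroring the grep -v / echo / cp pipeline in the log.
    func setHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }
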
	I1206 09:53:43.944888  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:53:44.030422  802078 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:53:44.054751  802078 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381 for IP: 192.168.94.2
	I1206 09:53:44.054774  802078 certs.go:195] generating shared ca certs ...
	I1206 09:53:44.054796  802078 certs.go:227] acquiring lock for ca certs: {Name:mkb016cbabf24a3b95bea5c4dcabd8b5087558c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.054979  802078 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key
	I1206 09:53:44.055055  802078 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key
	I1206 09:53:44.055074  802078 certs.go:257] generating profile certs ...
	I1206 09:53:44.055148  802078 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key
	I1206 09:53:44.055166  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt with IP's: []
	I1206 09:53:44.179136  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt ...
	I1206 09:53:44.179163  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.crt: {Name:mkbed0739e68db5951cd1670ef77a82b17aedb26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.179330  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key ...
	I1206 09:53:44.179342  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/client.key: {Name:mk3e2c0a04a2e3e8f578932802d27c8b90d53860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.179422  802078 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3
	I1206 09:53:44.179436  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:53:44.342441  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 ...
	I1206 09:53:44.342476  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3: {Name:mk0af4503346333895c5c579d4fb2a8c9dcfdcee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.342649  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3 ...
	I1206 09:53:44.342667  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3: {Name:mk0a4a58e7f8845d02778448b3e5355101c2e3fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.342770  802078 certs.go:382] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt.67cc56e3 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt
	I1206 09:53:44.342868  802078 certs.go:386] copying /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key.67cc56e3 -> /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key
	I1206 09:53:44.342951  802078 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key
	I1206 09:53:44.342972  802078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt with IP's: []
	I1206 09:53:44.520746  802078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt ...
	I1206 09:53:44.520773  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt: {Name:mk958624794dd2556a8291c9921b454b157f3c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.520946  802078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key ...
	I1206 09:53:44.520964  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key: {Name:mk18cc910abc009e83545d8f4f4f90e12f1bb752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:53:44.521164  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem (1338 bytes)
	W1206 09:53:44.521215  802078 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867_empty.pem, impossibly tiny 0 bytes
	I1206 09:53:44.521249  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:53:44.521292  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:53:44.521333  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:53:44.521369  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/certs/key.pem (1675 bytes)
	I1206 09:53:44.521446  802078 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem (1708 bytes)
	I1206 09:53:44.522095  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:53:44.543389  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:53:44.561934  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:53:44.580049  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:53:44.597598  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:53:44.616278  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:53:44.633517  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:53:44.650699  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kindnet-983381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:53:44.668505  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/ssl/certs/5028672.pem --> /usr/share/ca-certificates/5028672.pem (1708 bytes)
	I1206 09:53:44.687149  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:53:44.704280  802078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-499330/.minikube/certs/502867.pem --> /usr/share/ca-certificates/502867.pem (1338 bytes)
	I1206 09:53:44.721767  802078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:53:44.734395  802078 ssh_runner.go:195] Run: openssl version
	I1206 09:53:44.740626  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.747691  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5028672.pem /etc/ssl/certs/5028672.pem
	I1206 09:53:44.755054  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.758535  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:21 /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.758584  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5028672.pem
	I1206 09:53:44.794842  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:44.803151  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5028672.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:53:44.811376  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.819662  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:53:44.827450  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.831929  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.831984  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:53:44.868641  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:53:44.877016  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:53:44.884481  802078 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.892077  802078 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/502867.pem /etc/ssl/certs/502867.pem
	I1206 09:53:44.899367  802078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.903291  802078 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:21 /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.903338  802078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502867.pem
	I1206 09:53:44.949126  802078 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:53:44.957677  802078 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/502867.pem /etc/ssl/certs/51391683.0
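	The hash/symlink pairs above follow OpenSSL's CA lookup convention: trust stores are searched by a filename of the form <subject_hash>.0 under /etc/ssl/certs. Reproducing the minikubeCA link from this run by hand:
	
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  b5213941
	  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0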
	I1206 09:53:44.966120  802078 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:53:44.970308  802078 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:53:44.970374  802078 kubeadm.go:401] StartCluster: {Name:kindnet-983381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-983381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:53:44.970477  802078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:53:44.970560  802078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:53:45.001343  802078 cri.go:89] found id: ""
	I1206 09:53:45.001414  802078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:53:45.012421  802078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:53:45.021194  802078 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:53:45.021261  802078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:53:45.029115  802078 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:53:45.029134  802078 kubeadm.go:158] found existing configuration files:
	
	I1206 09:53:45.029169  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:53:45.037815  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:53:45.037872  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:53:45.045435  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:53:45.053955  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:53:45.054012  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:53:45.062303  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:53:45.070593  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:53:45.070643  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:53:45.079208  802078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:53:45.088134  802078 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:53:45.088189  802078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:53:45.095979  802078 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:53:45.138079  802078 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:53:45.138199  802078 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:53:45.160175  802078 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:53:45.160254  802078 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:53:45.160285  802078 kubeadm.go:319] OS: Linux
	I1206 09:53:45.160352  802078 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:53:45.160443  802078 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:53:45.160554  802078 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:53:45.160647  802078 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:53:45.160734  802078 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:53:45.160812  802078 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:53:45.160892  802078 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:53:45.160962  802078 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:53:45.221322  802078 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:53:45.221523  802078 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:53:45.221688  802078 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:53:45.229074  802078 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:53:40.367608  796626 addons.go:530] duration metric: took 1.818454225s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1206 09:53:41.032946  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:43.533007  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:45.230909  802078 out.go:252]   - Generating certificates and keys ...
	I1206 09:53:45.231011  802078 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:53:45.231124  802078 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:53:45.410620  802078 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:53:45.930986  802078 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:53:46.263989  802078 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:53:46.476019  802078 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:53:46.655346  802078 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:53:46.655593  802078 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-983381 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:53:46.754725  802078 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:53:46.754894  802078 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-983381 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:53:46.832327  802078 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:53:46.992545  802078 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:53:47.179111  802078 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:53:47.179231  802078 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:53:47.446389  802078 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:53:47.805253  802078 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:53:48.039364  802078 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:53:48.570846  802078 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:53:48.856028  802078 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:53:48.856598  802078 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:53:48.860303  802078 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1206 09:53:46.032859  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:48.532015  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:48.862377  802078 out.go:252]   - Booting up control plane ...
	I1206 09:53:48.862492  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:53:48.862569  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:53:48.862631  802078 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:53:48.876239  802078 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:53:48.876432  802078 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:53:48.883203  802078 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:53:48.883360  802078 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:53:48.883405  802078 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:53:48.990510  802078 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:53:48.990684  802078 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:53:50.991982  802078 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001590138s
	I1206 09:53:50.996336  802078 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:53:50.996527  802078 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:53:50.996665  802078 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:53:50.996797  802078 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:53:52.001314  802078 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004895696s
	I1206 09:53:52.920797  802078 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.924413306s
	I1206 09:53:54.497811  802078 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501431632s
	I1206 09:53:54.513719  802078 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:53:54.523571  802078 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:53:54.531849  802078 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:53:54.532153  802078 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-983381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:53:54.540058  802078 kubeadm.go:319] [bootstrap-token] Using token: prjydb.psh7t9q7oigrozcv
	W1206 09:53:51.032320  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:53.032413  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:54.541278  802078 out.go:252]   - Configuring RBAC rules ...
	I1206 09:53:54.541415  802078 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:53:54.544151  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:53:54.548808  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:53:54.551043  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:53:54.553366  802078 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:53:54.556175  802078 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:53:54.904103  802078 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:53:55.318278  802078 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:53:55.904952  802078 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:53:55.906231  802078 kubeadm.go:319] 
	I1206 09:53:55.906359  802078 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:53:55.906379  802078 kubeadm.go:319] 
	I1206 09:53:55.906487  802078 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:53:55.906517  802078 kubeadm.go:319] 
	I1206 09:53:55.906565  802078 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:53:55.906639  802078 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:53:55.906715  802078 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:53:55.906721  802078 kubeadm.go:319] 
	I1206 09:53:55.906789  802078 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:53:55.906795  802078 kubeadm.go:319] 
	I1206 09:53:55.906852  802078 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:53:55.906863  802078 kubeadm.go:319] 
	I1206 09:53:55.906921  802078 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:53:55.907032  802078 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:53:55.907129  802078 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:53:55.907138  802078 kubeadm.go:319] 
	I1206 09:53:55.907270  802078 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:53:55.907380  802078 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:53:55.907390  802078 kubeadm.go:319] 
	I1206 09:53:55.907524  802078 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token prjydb.psh7t9q7oigrozcv \
	I1206 09:53:55.907678  802078 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 \
	I1206 09:53:55.907711  802078 kubeadm.go:319] 	--control-plane 
	I1206 09:53:55.907722  802078 kubeadm.go:319] 
	I1206 09:53:55.907839  802078 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:53:55.907848  802078 kubeadm.go:319] 
	I1206 09:53:55.907970  802078 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token prjydb.psh7t9q7oigrozcv \
	I1206 09:53:55.908121  802078 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44 
	I1206 09:53:55.911115  802078 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:53:55.911281  802078 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
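	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of recomputing it on the control plane, using the certificatesDir from the kubeadm config earlier in this log (this is the openssl pipeline the kubeadm docs describe; it should print the hash embedded in the join command):
	
	  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	  ac1814160973937286e7b30114340d3bc7fa066bce6a763cf6b09fc451584a44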
	I1206 09:53:55.911328  802078 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:53:55.912757  802078 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1206 09:53:55.532658  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	W1206 09:53:58.033512  796626 node_ready.go:57] node "auto-983381" has "Ready":"False" status (will retry)
	I1206 09:53:55.913763  802078 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:53:55.918934  802078 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:53:55.918954  802078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:53:55.936286  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:53:56.167434  802078 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:53:56.167545  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:56.167602  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-983381 minikube.k8s.io/updated_at=2025_12_06T09_53_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=kindnet-983381 minikube.k8s.io/primary=true
	I1206 09:53:56.252647  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:56.252662  802078 ops.go:34] apiserver oom_adj: -16
	I1206 09:53:56.753658  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:57.253633  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:57.753414  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:58.253172  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:58.753293  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:59.253685  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:53:59.753398  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:54:00.253255  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:54:00.753732  802078 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:54:00.848906  802078 kubeadm.go:1114] duration metric: took 4.681414986s to wait for elevateKubeSystemPrivileges
	I1206 09:54:00.849070  802078 kubeadm.go:403] duration metric: took 15.878699162s to StartCluster
	I1206 09:54:00.849106  802078 settings.go:142] acquiring lock: {Name:mk4b083306953afa835d7cf3bbb426aabed51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:54:00.849169  802078 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:54:00.851977  802078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/kubeconfig: {Name:mk338752ef620ad3d54b93aaf0e82bc7cb4d3d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:54:00.852280  802078 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:54:00.852476  802078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:54:00.852823  802078 config.go:182] Loaded profile config "kindnet-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:54:00.852872  802078 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:54:00.852957  802078 addons.go:70] Setting storage-provisioner=true in profile "kindnet-983381"
	I1206 09:54:00.852975  802078 addons.go:239] Setting addon storage-provisioner=true in "kindnet-983381"
	I1206 09:54:00.853005  802078 host.go:66] Checking if "kindnet-983381" exists ...
	I1206 09:54:00.853562  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:54:00.853908  802078 addons.go:70] Setting default-storageclass=true in profile "kindnet-983381"
	I1206 09:54:00.853931  802078 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-983381"
	I1206 09:54:00.854271  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:54:00.855902  802078 out.go:179] * Verifying Kubernetes components...
	I1206 09:54:00.860893  802078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:54:00.894799  802078 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:54:00.896130  802078 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:54:00.896151  802078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:54:00.896222  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:54:00.896425  802078 addons.go:239] Setting addon default-storageclass=true in "kindnet-983381"
	I1206 09:54:00.896505  802078 host.go:66] Checking if "kindnet-983381" exists ...
	I1206 09:54:00.897191  802078 cli_runner.go:164] Run: docker container inspect kindnet-983381 --format={{.State.Status}}
	I1206 09:54:00.936796  802078 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:54:00.936824  802078 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:54:00.936884  802078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-983381
	I1206 09:54:00.937414  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:54:00.964843  802078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33236 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/kindnet-983381/id_rsa Username:docker}
	I1206 09:54:00.985034  802078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:54:01.035792  802078 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:54:01.061379  802078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:54:01.080581  802078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:54:01.215041  802078 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
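	The sed pipeline a few lines above rewrites the CoreDNS Corefile in place; reconstructed from its expressions, the injected block (with this run's gateway IP) is:
	
	          hosts {
	             192.168.94.1 host.minikube.internal
	             fallthrough
	          }
	
	placed just before the existing "forward . /etc/resolv.conf" directive, along with a "log" line inserted ahead of "errors".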
	I1206 09:54:01.216776  802078 node_ready.go:35] waiting up to 15m0s for node "kindnet-983381" to be "Ready" ...
	I1206 09:54:01.443573  802078 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Dec 06 09:53:41 embed-certs-997968 crio[564]: time="2025-12-06T09:53:41.36087178Z" level=info msg="Removing container: 48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96" id=63ce59a3-d0b2-4483-8573-a4b0389d7907 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:41 embed-certs-997968 crio[564]: time="2025-12-06T09:53:41.372733429Z" level=info msg="Removed container 48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b/dashboard-metrics-scraper" id=63ce59a3-d0b2-4483-8573-a4b0389d7907 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.364094727Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8465008f-1503-492f-8b13-0497ef70845f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.365071328Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ef0fe62-affe-46df-b96c-d08c9c2479a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.366142063Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4e1fda57-e9c0-4475-b7d3-eb5e20428679 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.366293596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.373882647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.374121513Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e721eb050fddd4c4d9e8d7e1b38c6efc61d9219ecb08fa5f133ee73a933bdc86/merged/etc/passwd: no such file or directory"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.374163699Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e721eb050fddd4c4d9e8d7e1b38c6efc61d9219ecb08fa5f133ee73a933bdc86/merged/etc/group: no such file or directory"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.374440581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.41024507Z" level=info msg="Created container b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b: kube-system/storage-provisioner/storage-provisioner" id=4e1fda57-e9c0-4475-b7d3-eb5e20428679 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.41093069Z" level=info msg="Starting container: b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b" id=b0351ef8-1f0c-4e4f-b96b-5b8071b51b42 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:53:42 embed-certs-997968 crio[564]: time="2025-12-06T09:53:42.412764824Z" level=info msg="Started container" PID=1725 containerID=b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b description=kube-system/storage-provisioner/storage-provisioner id=b0351ef8-1f0c-4e4f-b96b-5b8071b51b42 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ef42729e937b2ca3e9bba2aedeb2ae00aca5fa5ae7450af6403e2bf88786965
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.958788892Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.964576197Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.964612048Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.964643054Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.969937078Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.969965861Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.96998885Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.974072412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.974103292Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.974128665Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.97848855Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:53:51 embed-certs-997968 crio[564]: time="2025-12-06T09:53:51.978519004Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b15abaa4621ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   8ef42729e937b       storage-provisioner                          kube-system
	7ef988bd35261       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   8eb1d165be86e       dashboard-metrics-scraper-6ffb444bf9-ffd4b   kubernetes-dashboard
	65c20f2832484       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   013e22a0e95e9       kubernetes-dashboard-855c9754f9-tc684        kubernetes-dashboard
	80113249bfadd       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   5954fbeed6686       busybox                                      default
	7cab4e729fa4f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   4a32336c7ccfb       coredns-66bc5c9577-kw8nl                     kube-system
	7c3a2deb09c8c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           50 seconds ago      Running             kube-proxy                  0                   50869195acbbf       kube-proxy-m2zpr                             kube-system
	d5f15fc411f8e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   bfbcc644b9c76       kindnet-f84xr                                kube-system
	cf27f79cf6600       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   8ef42729e937b       storage-provisioner                          kube-system
	f0c346e2ecb86       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           53 seconds ago      Running             kube-controller-manager     0                   4aaae2e5af9d1       kube-controller-manager-embed-certs-997968   kube-system
	ccbdbea6e31f7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           53 seconds ago      Running             kube-apiserver              0                   7365f9ba1c6a7       kube-apiserver-embed-certs-997968            kube-system
	9567c8724e790       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           53 seconds ago      Running             kube-scheduler              0                   8893d48f24b35       kube-scheduler-embed-certs-997968            kube-system
	aea22bcd770b6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   3e1285f7168b4       etcd-embed-certs-997968                      kube-system
	
	
	==> coredns [7cab4e729fa4fbf88d02cc827d35d3b458ec55221475e3d84901c71b0aaffabd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41778 - 33262 "HINFO IN 7832111765629505420.4352648602490583271. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057761001s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
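	The "dial tcp 10.96.0.1:443: i/o timeout" errors above are CoreDNS coming up before kube-proxy has programmed the kubernetes Service VIP; they clear once the proxy rules land. If they persisted, a typical triage sequence (standard kubectl selectors, sketched rather than taken from this run) would be:
	
	  $ kubectl get svc kubernetes
	  $ kubectl -n kube-system get pods -l k8s-app=kube-proxy
	  $ kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20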
	
	
	==> describe nodes <==
	Name:               embed-certs-997968
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-997968
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=embed-certs-997968
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_52_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:52:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-997968
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:53:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:53:42 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:53:42 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:53:42 +0000   Sat, 06 Dec 2025 09:52:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:53:42 +0000   Sat, 06 Dec 2025 09:52:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-997968
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                39095a07-7a66-4c4f-9c45-34915880419b
	  Boot ID:                    a3529236-4d1c-4f06-828a-7f970a283d2d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-kw8nl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-997968                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-f84xr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-997968             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-997968    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-m2zpr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-997968             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ffd4b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tc684         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node embed-certs-997968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node embed-certs-997968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node embed-certs-997968 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node embed-certs-997968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node embed-certs-997968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node embed-certs-997968 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-997968 event: Registered Node embed-certs-997968 in Controller
	  Normal  NodeReady                93s                  kubelet          Node embed-certs-997968 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node embed-certs-997968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node embed-certs-997968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node embed-certs-997968 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node embed-certs-997968 event: Registered Node embed-certs-997968 in Controller
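The percentages in the "Allocated resources" table above are just the summed pod requests divided by allocatable capacity, truncated to a whole percent. A minimal Go sketch reproducing the cpu and memory rows; all constants are copied from the tables in this describe output:

package main

import "fmt"

func main() {
	// Allocatable capacity from the node describe output above.
	const cpuMilli = 8 * 1000 // 8 CPUs in millicores
	const memKi = 32863352    // allocatable memory in Ki

	// Summed requests from the Non-terminated Pods table:
	// coredns 100m + etcd 100m + kindnet 100m + apiserver 250m +
	// controller-manager 200m + scheduler 100m.
	cpuReq := 100 + 100 + 100 + 250 + 200 + 100 // millicores
	memReqKi := (70 + 100 + 50) * 1024          // 70Mi + 100Mi + 50Mi, in Ki

	// Integer division truncates, matching kubectl's whole-percent display:
	// prints "cpu 850m (10%)" and "memory 220Mi (0%)".
	fmt.Printf("cpu %dm (%d%%)\n", cpuReq, cpuReq*100/cpuMilli)
	fmt.Printf("memory %dMi (%d%%)\n", memReqKi/1024, memReqKi*100/memKi)
}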
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e c3 fa ec bb b2 08 06
	[  +3.958070] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce cf 29 ca 87 b6 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.029139] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023931] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +2.047842] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[  +4.031774] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[Dec 6 09:15] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +16.383010] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	[ +32.253846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 33 4b 7b f4 07 66 0b 30 4d 0c cf 08 00
	
	
	==> etcd [aea22bcd770b685f5b36f548f9387928f647a3eb4b9ecbbe8f9c4b71394765c0] <==
	{"level":"warn","ts":"2025-12-06T09:53:15.995225Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.310835ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-12-06T09:53:15.995261Z","caller":"traceutil/trace.go:172","msg":"trace[1845936111] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"309.108757ms","start":"2025-12-06T09:53:15.686141Z","end":"2025-12-06T09:53:15.995250Z","steps":["trace[1845936111] 'process raft request'  (duration: 309.060796ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:53:15.995272Z","caller":"traceutil/trace.go:172","msg":"trace[1074145201] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:542; }","duration":"187.369374ms","start":"2025-12-06T09:53:15.807895Z","end":"2025-12-06T09:53:15.995264Z","steps":["trace[1074145201] 'agreement among raft nodes before linearized reading'  (duration: 187.237956ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:53:15.995273Z","caller":"traceutil/trace.go:172","msg":"trace[1315797574] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"310.001245ms","start":"2025-12-06T09:53:15.685266Z","end":"2025-12-06T09:53:15.995268Z","steps":["trace[1315797574] 'process raft request'  (duration: 309.841367ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:53:15.995243Z","caller":"traceutil/trace.go:172","msg":"trace[1803967279] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9; range_end:; response_count:1; response_revision:542; }","duration":"310.445017ms","start":"2025-12-06T09:53:15.684787Z","end":"2025-12-06T09:53:15.995232Z","steps":["trace[1803967279] 'agreement among raft nodes before linearized reading'  (duration: 310.259884ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:53:15.995336Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.685250Z","time spent":"310.056067ms","remote":"127.0.0.1:39720","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":708,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-997968.187e9799313750c3\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/embed-certs-997968.187e9799313750c3\" value_size:630 lease:499225502517217530 >> failure:<>"}
	{"level":"warn","ts":"2025-12-06T09:53:15.995337Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.686131Z","time spent":"309.157654ms","remote":"127.0.0.1:40518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3123,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" mod_revision:537 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" value_size:3041 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" > >"}
	{"level":"warn","ts":"2025-12-06T09:53:15.995362Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.684773Z","time spent":"310.572651ms","remote":"127.0.0.1:40518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":1,"response size":2996,"request content":"key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:53:16.176313Z","caller":"traceutil/trace.go:172","msg":"trace[1426097075] linearizableReadLoop","detail":"{readStateIndex:575; appliedIndex:575; }","duration":"122.644918ms","start":"2025-12-06T09:53:16.053630Z","end":"2025-12-06T09:53:16.176275Z","steps":["trace[1426097075] 'read index received'  (duration: 122.634725ms)","trace[1426097075] 'applied index is now lower than readState.Index'  (duration: 8.72µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:53:16.304016Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"307.543592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684\" limit:1 ","response":"range_response_count:1 size:2850"}
	{"level":"info","ts":"2025-12-06T09:53:16.304108Z","caller":"traceutil/trace.go:172","msg":"trace[1519919302] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684; range_end:; response_count:1; response_revision:545; }","duration":"307.630223ms","start":"2025-12-06T09:53:15.996443Z","end":"2025-12-06T09:53:16.304073Z","steps":["trace[1519919302] 'agreement among raft nodes before linearized reading'  (duration: 179.951085ms)","trace[1519919302] 'range keys from in-memory index tree'  (duration: 127.497162ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:53:16.304031Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"277.810833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-kw8nl\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"warn","ts":"2025-12-06T09:53:16.304165Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.996427Z","time spent":"307.726572ms","remote":"127.0.0.1:39946","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":1,"response size":2873,"request content":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:53:16.304185Z","caller":"traceutil/trace.go:172","msg":"trace[1468186240] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-kw8nl; range_end:; response_count:1; response_revision:545; }","duration":"277.970629ms","start":"2025-12-06T09:53:16.026199Z","end":"2025-12-06T09:53:16.304170Z","steps":["trace[1468186240] 'agreement among raft nodes before linearized reading'  (duration: 150.117406ms)","trace[1468186240] 'range keys from in-memory index tree'  (duration: 127.60717ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:53:16.304236Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.700232ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597539371993396 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-997968.187e9799313765dc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/embed-certs-997968.187e9799313765dc\" value_size:628 lease:499225502517217530 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:53:16.304369Z","caller":"traceutil/trace.go:172","msg":"trace[1839414587] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"305.827801ms","start":"2025-12-06T09:53:15.998524Z","end":"2025-12-06T09:53:16.304352Z","steps":["trace[1839414587] 'process raft request'  (duration: 177.780029ms)","trace[1839414587] 'compare'  (duration: 127.596242ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:53:16.304425Z","caller":"traceutil/trace.go:172","msg":"trace[658265676] linearizableReadLoop","detail":"{readStateIndex:578; appliedIndex:575; }","duration":"128.05641ms","start":"2025-12-06T09:53:16.176359Z","end":"2025-12-06T09:53:16.304415Z","steps":["trace[658265676] 'read index received'  (duration: 47.98µs)","trace[658265676] 'applied index is now lower than readState.Index'  (duration: 128.007774ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:53:16.304488Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.998508Z","time spent":"305.888398ms","remote":"127.0.0.1:39720","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":706,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-997968.187e9799313765dc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/embed-certs-997968.187e9799313765dc\" value_size:628 lease:499225502517217530 >> failure:<>"}
	{"level":"info","ts":"2025-12-06T09:53:16.304536Z","caller":"traceutil/trace.go:172","msg":"trace[629104235] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"304.972804ms","start":"2025-12-06T09:53:15.999554Z","end":"2025-12-06T09:53:16.304527Z","steps":["trace[629104235] 'process raft request'  (duration: 304.731479ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:53:16.304556Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.050666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684\" limit:1 ","response":"range_response_count:1 size:2850"}
	{"level":"info","ts":"2025-12-06T09:53:16.304582Z","caller":"traceutil/trace.go:172","msg":"trace[1814395393] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684; range_end:; response_count:1; response_revision:548; }","duration":"248.080726ms","start":"2025-12-06T09:53:16.056494Z","end":"2025-12-06T09:53:16.304575Z","steps":["trace[1814395393] 'agreement among raft nodes before linearized reading'  (duration: 247.954686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:53:16.304601Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:15.999535Z","time spent":"305.030209ms","remote":"127.0.0.1:40518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3003,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" mod_revision:534 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" value_size:2916 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" > >"}
	{"level":"info","ts":"2025-12-06T09:53:16.304604Z","caller":"traceutil/trace.go:172","msg":"trace[900876469] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"304.458329ms","start":"2025-12-06T09:53:16.000136Z","end":"2025-12-06T09:53:16.304594Z","steps":["trace[900876469] 'process raft request'  (duration: 304.230407ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:53:16.304660Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:53:16.000124Z","time spent":"304.500922ms","remote":"127.0.0.1:40466","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4918,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:538 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:4847 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >"}
	{"level":"info","ts":"2025-12-06T09:53:16.724639Z","caller":"traceutil/trace.go:172","msg":"trace[507108098] transaction","detail":"{read_only:false; response_revision:561; number_of_response:1; }","duration":"101.072643ms","start":"2025-12-06T09:53:16.623549Z","end":"2025-12-06T09:53:16.724621Z","steps":["trace[507108098] 'process raft request'  (duration: 38.142461ms)","trace[507108098] 'compare'  (duration: 62.822175ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:54:02 up  2:36,  0 user,  load average: 4.10, 3.34, 3.37
	Linux embed-certs-997968 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5f15fc411f8e34d0fe7d52849aaf1d7a447d0b42b610ca92f5e65f54ca33b72] <==
	I1206 09:53:11.736011       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:53:11.736314       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:53:11.736516       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:53:11.736541       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:53:11.736566       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:53:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:53:11.955650       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:53:11.955696       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:53:11.955709       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:53:11.956027       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 09:53:41.955876       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1206 09:53:41.955888       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 09:53:41.956028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 09:53:41.957146       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1206 09:53:43.556317       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:53:43.556362       1 metrics.go:72] Registering metrics
	I1206 09:53:43.556434       1 controller.go:711] "Syncing nftables rules"
	I1206 09:53:51.958384       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:53:51.958501       1 main.go:301] handling current node
	I1206 09:54:01.963572       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:54:01.963608       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ccbdbea6e31f77d77210cb56e75d243da8b87d3a1bba9fb48502f886fe7cc436] <==
	I1206 09:53:10.940314       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:53:10.940019       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:53:10.940969       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:53:10.940046       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:53:10.940066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:53:10.941656       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:53:10.940572       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:53:10.940918       1 aggregator.go:171] initial CRD sync complete...
	I1206 09:53:10.941976       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:53:10.941985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:53:10.941993       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:53:10.948380       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:53:10.978585       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:53:11.182197       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:53:11.241315       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:53:11.288752       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:53:11.319662       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:53:11.330207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:53:11.377074       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.134.52"}
	I1206 09:53:11.397498       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.17.36"}
	I1206 09:53:11.842860       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:53:14.491788       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:53:14.663106       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:53:14.875155       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f0c346e2ecb8689cc659d92dd982e72bea92df80d9c19d6fe9b36590adae4c5d] <==
	I1206 09:53:14.260126       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:53:14.273400       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:53:14.276757       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:53:14.278501       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:53:14.284091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:53:14.284091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:53:14.284095       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:53:14.284096       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:53:14.285496       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:53:14.294899       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:53:14.294939       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:53:14.295005       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:53:14.295155       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:53:14.296276       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:53:14.296375       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:53:14.296388       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:53:14.296413       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 09:53:14.298664       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:53:14.300928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:53:14.314086       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:53:14.314228       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:53:14.322340       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:53:14.324568       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:53:14.326813       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:53:14.331050       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [7c3a2deb09c8c337db0be2cf134ecc3f8dc26a79db21ff5911915b272f23ebec] <==
	I1206 09:53:11.598804       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:53:11.662654       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:53:11.763629       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:53:11.763699       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 09:53:11.763789       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:53:11.782450       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:53:11.782548       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:53:11.788206       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:53:11.788774       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:53:11.788813       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:11.791452       1 config.go:200] "Starting service config controller"
	I1206 09:53:11.791502       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:53:11.791655       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:53:11.791689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:53:11.791756       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:53:11.791764       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:53:11.791782       1 config.go:309] "Starting node config controller"
	I1206 09:53:11.791787       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:53:11.791793       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:53:11.892512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:53:11.892512       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:53:11.892543       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9567c8724e7902114f90b0bfd9aeaba8475dd4c7fdffc2b71b9794b8d2429d02] <==
	I1206 09:53:09.643160       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:53:10.902061       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:53:10.902100       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:53:10.907730       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 09:53:10.907770       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1206 09:53:10.907838       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:53:10.907909       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:53:10.907888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:53:10.908016       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:53:10.908200       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:53:10.908300       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:53:11.008689       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1206 09:53:11.008736       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:53:11.008752       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:53:11 embed-certs-997968 kubelet[730]: E1206 09:53:11.266880     730 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-997968\" already exists" pod="kube-system/etcd-embed-certs-997968"
	Dec 06 09:53:15 embed-certs-997968 kubelet[730]: I1206 09:53:15.704973     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f914f59a-882f-4ac6-babd-0ef19a2aed75-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ffd4b\" (UID: \"f914f59a-882f-4ac6-babd-0ef19a2aed75\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b"
	Dec 06 09:53:15 embed-certs-997968 kubelet[730]: I1206 09:53:15.705032     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztf7n\" (UniqueName: \"kubernetes.io/projected/f914f59a-882f-4ac6-babd-0ef19a2aed75-kube-api-access-ztf7n\") pod \"dashboard-metrics-scraper-6ffb444bf9-ffd4b\" (UID: \"f914f59a-882f-4ac6-babd-0ef19a2aed75\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b"
	Dec 06 09:53:15 embed-certs-997968 kubelet[730]: I1206 09:53:15.805486     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v962\" (UniqueName: \"kubernetes.io/projected/48554eb1-e975-4229-8ee7-2e6aeb6ed273-kube-api-access-2v962\") pod \"kubernetes-dashboard-855c9754f9-tc684\" (UID: \"48554eb1-e975-4229-8ee7-2e6aeb6ed273\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684"
	Dec 06 09:53:15 embed-certs-997968 kubelet[730]: I1206 09:53:15.805595     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/48554eb1-e975-4229-8ee7-2e6aeb6ed273-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-tc684\" (UID: \"48554eb1-e975-4229-8ee7-2e6aeb6ed273\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684"
	Dec 06 09:53:19 embed-certs-997968 kubelet[730]: I1206 09:53:19.289964     730 scope.go:117] "RemoveContainer" containerID="84493d6c463fefaf379ace03cd2f4e02cfbf27abe23fb1b594485606e268f6fb"
	Dec 06 09:53:20 embed-certs-997968 kubelet[730]: I1206 09:53:20.294714     730 scope.go:117] "RemoveContainer" containerID="84493d6c463fefaf379ace03cd2f4e02cfbf27abe23fb1b594485606e268f6fb"
	Dec 06 09:53:20 embed-certs-997968 kubelet[730]: I1206 09:53:20.294854     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:20 embed-certs-997968 kubelet[730]: E1206 09:53:20.295054     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:21 embed-certs-997968 kubelet[730]: I1206 09:53:21.299368     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:21 embed-certs-997968 kubelet[730]: E1206 09:53:21.299611     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:23 embed-certs-997968 kubelet[730]: I1206 09:53:23.316695     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tc684" podStartSLOduration=1.782078582 podStartE2EDuration="8.316669259s" podCreationTimestamp="2025-12-06 09:53:15 +0000 UTC" firstStartedPulling="2025-12-06 09:53:16.545446722 +0000 UTC m=+8.473091879" lastFinishedPulling="2025-12-06 09:53:23.080037395 +0000 UTC m=+15.007682556" observedRunningTime="2025-12-06 09:53:23.316474957 +0000 UTC m=+15.244120137" watchObservedRunningTime="2025-12-06 09:53:23.316669259 +0000 UTC m=+15.244314438"
	Dec 06 09:53:26 embed-certs-997968 kubelet[730]: I1206 09:53:26.571220     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:26 embed-certs-997968 kubelet[730]: E1206 09:53:26.571428     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:41 embed-certs-997968 kubelet[730]: I1206 09:53:41.207550     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:41 embed-certs-997968 kubelet[730]: I1206 09:53:41.359413     730 scope.go:117] "RemoveContainer" containerID="48634698e2ee90aabf9fb29b8f1477bede616145ba071fdeb02a2f99dd69ce96"
	Dec 06 09:53:41 embed-certs-997968 kubelet[730]: I1206 09:53:41.359678     730 scope.go:117] "RemoveContainer" containerID="7ef988bd352613c28719b53227c1f510e726f382778e72ae58558de1a8ee8a55"
	Dec 06 09:53:41 embed-certs-997968 kubelet[730]: E1206 09:53:41.359906     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:42 embed-certs-997968 kubelet[730]: I1206 09:53:42.363658     730 scope.go:117] "RemoveContainer" containerID="cf27f79cf660003825ed87864bf3215b6c1821e837a85725d61f857172afc541"
	Dec 06 09:53:46 embed-certs-997968 kubelet[730]: I1206 09:53:46.571546     730 scope.go:117] "RemoveContainer" containerID="7ef988bd352613c28719b53227c1f510e726f382778e72ae58558de1a8ee8a55"
	Dec 06 09:53:46 embed-certs-997968 kubelet[730]: E1206 09:53:46.571762     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ffd4b_kubernetes-dashboard(f914f59a-882f-4ac6-babd-0ef19a2aed75)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ffd4b" podUID="f914f59a-882f-4ac6-babd-0ef19a2aed75"
	Dec 06 09:53:57 embed-certs-997968 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:53:57 embed-certs-997968 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:53:57 embed-certs-997968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:53:57 embed-certs-997968 systemd[1]: kubelet.service: Consumed 1.681s CPU time.
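The CrashLoopBackOff messages above show the restart back-off doubling from 10s to 20s between attempts on dashboard-metrics-scraper. A minimal sketch of that doubling; the 5-minute cap is kubelet's documented maximum, not something visible in this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const maxBackoff = 5 * time.Minute // kubelet's documented cap
	backoff := 10 * time.Second        // first back-off seen in the log above
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("restart %d: back-off %s\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}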
	
	
	==> kubernetes-dashboard [65c20f28324841a573607a02aa9b5804867835a7e2ec696ee719ec51845d6c3f] <==
	2025/12/06 09:53:23 Using namespace: kubernetes-dashboard
	2025/12/06 09:53:23 Using in-cluster config to connect to apiserver
	2025/12/06 09:53:23 Using secret token for csrf signing
	2025/12/06 09:53:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:53:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:53:23 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 09:53:23 Generating JWE encryption key
	2025/12/06 09:53:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:53:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:53:23 Initializing JWE encryption key from synchronized object
	2025/12/06 09:53:23 Creating in-cluster Sidecar client
	2025/12/06 09:53:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:53:23 Serving insecurely on HTTP port: 9090
	2025/12/06 09:53:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:53:23 Starting overwatch
	
	
	==> storage-provisioner [b15abaa4621ad0532519e6212d50ffcdce0366950b1104f0e45ec85ac48ff66b] <==
	I1206 09:53:42.425779       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:53:42.433354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:53:42.433388       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:53:42.435520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:45.890598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:50.150862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:53.748519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:56.810769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:59.834089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:59.840219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:59.840423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:53:59.840657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-997968_daee1b08-57e6-4e94-8c39-947a0612956c!
	I1206 09:53:59.840719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a3232d1c-1b95-4b7b-ae4c-725079989772", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-997968_daee1b08-57e6-4e94-8c39-947a0612956c became leader
	W1206 09:53:59.843801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:53:59.848120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:53:59.941572       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-997968_daee1b08-57e6-4e94-8c39-947a0612956c!
	W1206 09:54:01.852540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:54:01.860425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
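The lease dance above (attempting to acquire, then acquiring, kube-system/k8s.io-minikube-hostpath) is client-go leader election; the Endpoints deprecation warnings appear because this provisioner still uses the old v1 Endpoints lock. A minimal sketch of the same pattern using today's Lease lock; the identity string and callback bodies are illustrative, not the provisioner's actual code:

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock; avoids the v1 Endpoints deprecation warnings above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "kube-system",
			Name:      "k8s.io-minikube-hostpath",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "my-pod-id"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stopping")
			},
		},
	})
}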
	
	
	==> storage-provisioner [cf27f79cf660003825ed87864bf3215b6c1821e837a85725d61f857172afc541] <==
	I1206 09:53:11.563742       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:53:41.566944       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-997968 -n embed-certs-997968
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-997968 -n embed-certs-997968: exit status 2 (338.692334ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-997968 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.72s)


Test pass (354/415)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 15.21
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 10.97
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 11.46
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.96
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.76
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.83
31 TestOffline 62.69
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 101.26
40 TestAddons/serial/GCPAuth/Namespaces 0.16
41 TestAddons/serial/GCPAuth/FakeCredentials 9.43
57 TestAddons/StoppedEnableDisable 16.72
58 TestCertOptions 26.62
59 TestCertExpiration 212.87
61 TestForceSystemdFlag 30.67
62 TestForceSystemdEnv 36.6
67 TestErrorSpam/setup 19.18
68 TestErrorSpam/start 0.68
69 TestErrorSpam/status 0.94
70 TestErrorSpam/pause 6.58
71 TestErrorSpam/unpause 5.78
72 TestErrorSpam/stop 8.11
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 64.56
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.29
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.78
84 TestFunctional/serial/CacheCmd/cache/add_local 2.08
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 67.86
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.25
95 TestFunctional/serial/LogsFileCmd 1.28
96 TestFunctional/serial/InvalidService 5.47
98 TestFunctional/parallel/ConfigCmd 0.53
99 TestFunctional/parallel/DashboardCmd 6.23
100 TestFunctional/parallel/DryRun 0.41
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 1.02
106 TestFunctional/parallel/ServiceCmdConnect 10.53
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 46.82
110 TestFunctional/parallel/SSHCmd 0.69
111 TestFunctional/parallel/CpCmd 1.81
112 TestFunctional/parallel/MySQL 21.48
113 TestFunctional/parallel/FileSync 0.31
114 TestFunctional/parallel/CertSync 1.69
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
122 TestFunctional/parallel/License 0.44
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.23
128 TestFunctional/parallel/ServiceCmd/DeployApp 12.14
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/parallel/ServiceCmd/List 1.35
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
137 TestFunctional/parallel/ProfileCmd/profile_list 0.5
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
139 TestFunctional/parallel/ServiceCmd/JSONOutput 1.36
140 TestFunctional/parallel/MountCmd/any-port 8.19
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
142 TestFunctional/parallel/ServiceCmd/Format 0.37
143 TestFunctional/parallel/ServiceCmd/URL 0.37
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
147 TestFunctional/parallel/Version/short 0.07
148 TestFunctional/parallel/Version/components 0.53
149 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
150 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
151 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
152 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
153 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
154 TestFunctional/parallel/ImageCommands/Setup 1.95
155 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.15
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.03
158 TestFunctional/parallel/MountCmd/specific-port 2.2
159 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.97
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 38.13
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.15
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.56
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.03
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.52
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 65.89
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.22
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.21
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 5.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 12.32
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.37
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.17
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.98
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 11.72
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.17
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 24.79
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.69
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.84
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 16.82
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.3
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.84
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.09
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.56
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.45
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.21
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.19
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.19
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.35
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.08
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.6
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.43
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.42
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.55
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 9.94
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 1.09
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.25
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.79
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.92
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 3.76
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.35
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.35
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.97
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.38
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.37
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.75
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.35
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.51
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.67
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.45
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.11
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 9.22
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.62
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 147.59
266 TestMultiControlPlane/serial/DeployApp 5.55
267 TestMultiControlPlane/serial/PingHostFromPods 1.07
268 TestMultiControlPlane/serial/AddWorkerNode 26.07
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
271 TestMultiControlPlane/serial/CopyFile 17.13
272 TestMultiControlPlane/serial/StopSecondaryNode 13.3
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.25
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 130.58
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.64
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
279 TestMultiControlPlane/serial/StopCluster 41.35
280 TestMultiControlPlane/serial/RestartCluster 56.39
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
282 TestMultiControlPlane/serial/AddSecondaryNode 37.5
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
288 TestJSONOutput/start/Command 38.59
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.2
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 36.2
314 TestKicCustomNetwork/use_default_bridge_network 22.26
315 TestKicExistingNetwork 25.52
316 TestKicCustomSubnet 23.84
317 TestKicStaticIP 25.62
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 49.96
322 TestMountStart/serial/StartWithMountFirst 7.76
323 TestMountStart/serial/VerifyMountFirst 0.29
324 TestMountStart/serial/StartWithMountSecond 8.01
325 TestMountStart/serial/VerifyMountSecond 0.28
326 TestMountStart/serial/DeleteFirst 1.69
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.25
329 TestMountStart/serial/RestartStopped 7.67
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 67.11
334 TestMultiNode/serial/DeployApp2Nodes 4.59
335 TestMultiNode/serial/PingHostFrom2Pods 0.75
336 TestMultiNode/serial/AddNode 25.59
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.66
339 TestMultiNode/serial/CopyFile 9.89
340 TestMultiNode/serial/StopNode 2.26
341 TestMultiNode/serial/StartAfterStop 7.24
342 TestMultiNode/serial/RestartKeepsNodes 75.51
343 TestMultiNode/serial/DeleteNode 5.26
344 TestMultiNode/serial/StopMultiNode 28.54
345 TestMultiNode/serial/RestartMultiNode 44.75
346 TestMultiNode/serial/ValidateNameConflict 25.41
351 TestPreload 84.3
353 TestScheduledStopUnix 98.49
356 TestInsufficientStorage 12.08
357 TestRunningBinaryUpgrade 46.56
359 TestKubernetesUpgrade 303.96
360 TestMissingContainerUpgrade 94.1
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
364 TestPause/serial/Start 80.14
365 TestNoKubernetes/serial/StartWithK8s 33.44
366 TestNoKubernetes/serial/StartWithStopK8s 27.96
367 TestNoKubernetes/serial/Start 9.07
368 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
369 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
370 TestNoKubernetes/serial/ProfileList 1.84
371 TestNoKubernetes/serial/Stop 1.29
372 TestNoKubernetes/serial/StartNoArgs 7
373 TestPause/serial/SecondStartNoReconfiguration 7.04
374 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
376 TestStoppedBinaryUpgrade/Setup 3.72
377 TestStoppedBinaryUpgrade/Upgrade 290.23
392 TestNetworkPlugins/group/false 3.59
397 TestStartStop/group/old-k8s-version/serial/FirstStart 51.18
398 TestStartStop/group/old-k8s-version/serial/DeployApp 9.25
400 TestStartStop/group/old-k8s-version/serial/Stop 16.07
401 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
402 TestStartStop/group/old-k8s-version/serial/SecondStart 50.94
404 TestStartStop/group/no-preload/serial/FirstStart 48.86
405 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.07
407 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
410 TestStartStop/group/embed-certs/serial/FirstStart 47.2
412 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.47
413 TestStartStop/group/no-preload/serial/DeployApp 10.29
415 TestStartStop/group/no-preload/serial/Stop 16.5
416 TestStoppedBinaryUpgrade/MinikubeLogs 1.27
418 TestStartStop/group/newest-cni/serial/FirstStart 22.79
419 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
420 TestStartStop/group/no-preload/serial/SecondStart 51.48
421 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
422 TestStartStop/group/embed-certs/serial/DeployApp 9.31
423 TestStartStop/group/newest-cni/serial/DeployApp 0
426 TestStartStop/group/newest-cni/serial/Stop 12.69
427 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.3
429 TestStartStop/group/embed-certs/serial/Stop 16.69
430 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
431 TestStartStop/group/newest-cni/serial/SecondStart 10.95
432 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
433 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.15
434 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
435 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
436 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
437 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
439 TestStartStop/group/embed-certs/serial/SecondStart 44
440 TestNetworkPlugins/group/auto/Start 72.65
441 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
442 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
443 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
445 TestNetworkPlugins/group/kindnet/Start 39.34
446 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
447 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
448 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
449 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
450 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
452 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
454 TestNetworkPlugins/group/calico/Start 49.3
455 TestNetworkPlugins/group/custom-flannel/Start 51.48
456 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
457 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
458 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
459 TestNetworkPlugins/group/auto/KubeletFlags 0.38
460 TestNetworkPlugins/group/auto/NetCatPod 9.26
461 TestNetworkPlugins/group/kindnet/DNS 0.13
462 TestNetworkPlugins/group/kindnet/Localhost 0.09
463 TestNetworkPlugins/group/kindnet/HairPin 0.09
464 TestNetworkPlugins/group/auto/DNS 0.14
465 TestNetworkPlugins/group/auto/Localhost 0.12
466 TestNetworkPlugins/group/auto/HairPin 0.12
467 TestNetworkPlugins/group/flannel/Start 48.65
468 TestNetworkPlugins/group/bridge/Start 71.08
469 TestNetworkPlugins/group/calico/ControllerPod 6.01
470 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
471 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.19
472 TestNetworkPlugins/group/calico/KubeletFlags 0.35
473 TestNetworkPlugins/group/calico/NetCatPod 9.3
474 TestNetworkPlugins/group/custom-flannel/DNS 0.12
475 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
476 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
477 TestNetworkPlugins/group/calico/DNS 0.12
478 TestNetworkPlugins/group/calico/Localhost 0.11
479 TestNetworkPlugins/group/calico/HairPin 0.11
480 TestNetworkPlugins/group/enable-default-cni/Start 59.49
481 TestNetworkPlugins/group/flannel/ControllerPod 6.01
482 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
483 TestNetworkPlugins/group/flannel/NetCatPod 8.22
484 TestNetworkPlugins/group/flannel/DNS 0.15
485 TestNetworkPlugins/group/flannel/Localhost 0.09
486 TestNetworkPlugins/group/flannel/HairPin 0.09
487 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
488 TestNetworkPlugins/group/bridge/NetCatPod 7.19
489 TestNetworkPlugins/group/bridge/DNS 0.13
490 TestNetworkPlugins/group/bridge/Localhost 0.1
491 TestNetworkPlugins/group/bridge/HairPin 0.1
492 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
493 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.17
494 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
495 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
496 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
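The rows above follow a fixed "index test-path duration-in-seconds" layout. A minimal Go sketch, assuming exactly that three-field layout, for extracting the durations from a saved copy of this report (this is not part of minikube's report tooling):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// Reads report rows such as
//   496 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
// from stdin and reprints them as structured values.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 3 {
			continue // not an "index name seconds" row; skip it
		}
		idx, errIdx := strconv.Atoi(fields[0])
		secs, errSecs := strconv.ParseFloat(fields[2], 64)
		if errIdx != nil || errSecs != nil {
			continue
		}
		fmt.Printf("#%d %s: %.2fs\n", idx, fields[1], secs)
	}
}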
TestDownloadOnly/v1.28.0/json-events (15.21s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-449563 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-449563 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.206833938s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (15.21s)
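The subtest above drives "minikube start" with -o=json, which makes minikube stream its progress as JSON on stdout. A minimal sketch of consuming that stream from Go, with the command line copied from the run above; treating each stdout line as a self-contained JSON object, and the "type" key, are assumptions about the event schema rather than documented guarantees:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-449563", "--force",
		"--kubernetes-version=v1.28.0", "--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // individual events can be long
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println("event:", ev["type"]) // "type" is an assumed field name
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}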

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 09:12:04.396784  502867 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1206 09:12:04.396882  502867 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
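The check above amounts to a stat of the cached tarball at the path printed in the log. A hypothetical helper sketching the same existence test; the cache layout and file-name pattern are read off the log line, and minikube's real implementation lives in preload.go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the CRI-O preload tarball for the given
// Kubernetes version is already present under the minikube home directory.
func preloadExists(minikubeHome, k8sVersion string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(p)
	return err == nil
}

func main() {
	fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0"))
}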

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-449563
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-449563: exit status 85 (76.793501ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-449563 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-449563 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:11:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:11:49.244390  502879 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:11:49.244503  502879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:49.244512  502879 out.go:374] Setting ErrFile to fd 2...
	I1206 09:11:49.244516  502879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:49.244698  502879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	W1206 09:11:49.244815  502879 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22047-499330/.minikube/config/config.json: open /home/jenkins/minikube-integration/22047-499330/.minikube/config/config.json: no such file or directory
	I1206 09:11:49.245478  502879 out.go:368] Setting JSON to true
	I1206 09:11:49.246924  502879 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6853,"bootTime":1765005456,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:11:49.246980  502879 start.go:143] virtualization: kvm guest
	I1206 09:11:49.250961  502879 out.go:99] [download-only-449563] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1206 09:11:49.251066  502879 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 09:11:49.251126  502879 notify.go:221] Checking for updates...
	I1206 09:11:49.252053  502879 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:11:49.253121  502879 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:11:49.254188  502879 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:11:49.255135  502879 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:11:49.256038  502879 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:11:49.257791  502879 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:11:49.258003  502879 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:11:49.281375  502879 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:11:49.281509  502879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:11:49.336238  502879 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-06 09:11:49.326245221 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:11:49.336384  502879 docker.go:319] overlay module found
	I1206 09:11:49.337967  502879 out.go:99] Using the docker driver based on user configuration
	I1206 09:11:49.337991  502879 start.go:309] selected driver: docker
	I1206 09:11:49.337997  502879 start.go:927] validating driver "docker" against <nil>
	I1206 09:11:49.338094  502879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:11:49.389706  502879 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-06 09:11:49.379812792 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:11:49.389919  502879 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:11:49.390697  502879 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 09:11:49.390957  502879 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:11:49.392426  502879 out.go:171] Using Docker driver with root privileges
	I1206 09:11:49.393423  502879 cni.go:84] Creating CNI manager for ""
	I1206 09:11:49.393513  502879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:11:49.393527  502879 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:11:49.393619  502879 start.go:353] cluster config:
	{Name:download-only-449563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-449563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:49.394579  502879 out.go:99] Starting "download-only-449563" primary control-plane node in "download-only-449563" cluster
	I1206 09:11:49.394593  502879 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:11:49.395587  502879 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:11:49.395644  502879 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 09:11:49.395715  502879 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:11:49.412035  502879 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 09:11:49.412209  502879 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 09:11:49.412314  502879 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 09:11:49.496491  502879 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:11:49.496535  502879 cache.go:65] Caching tarball of preloaded images
	I1206 09:11:49.496699  502879 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 09:11:49.498194  502879 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1206 09:11:49.498209  502879 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 09:11:49.605409  502879 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1206 09:11:49.605537  502879 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:12:01.743077  502879 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1206 09:12:01.743502  502879 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/download-only-449563/config.json ...
	I1206 09:12:01.743546  502879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/download-only-449563/config.json: {Name:mkdc8de6cae8f36c52a54b817a9f7752009d8e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:01.743757  502879 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 09:12:01.743952  502879 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-449563 host does not exist
	  To start a cluster, run: "minikube start -p download-only-449563"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
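Note that this subtest passes despite the non-zero exit: "minikube logs" is expected to fail with exit status 85 while the download-only host does not exist. A minimal Go sketch of asserting that specific exit code (the profile name is copied from the run above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-449563")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 85 {
		fmt.Println("got the expected exit status 85")
		return
	}
	fmt.Println("unexpected result:", err)
}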

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-449563
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (10.97s)
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-757324 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-757324 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.969293772s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (10.97s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 09:12:15.832483  502867 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 09:12:15.832524  502867 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-757324
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-757324: exit status 85 (75.094784ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-449563 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-449563 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-449563                                                                                                                                                   │ download-only-449563 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-757324 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-757324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:12:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:12:04.917368  503262 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:12:04.917649  503262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:04.917658  503262 out.go:374] Setting ErrFile to fd 2...
	I1206 09:12:04.917663  503262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:04.917875  503262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:12:04.918375  503262 out.go:368] Setting JSON to true
	I1206 09:12:04.919237  503262 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6869,"bootTime":1765005456,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:12:04.919291  503262 start.go:143] virtualization: kvm guest
	I1206 09:12:04.921312  503262 out.go:99] [download-only-757324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:12:04.921436  503262 notify.go:221] Checking for updates...
	I1206 09:12:04.922678  503262 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:12:04.923820  503262 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:12:04.925093  503262 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:12:04.926404  503262 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:12:04.927547  503262 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:12:04.929580  503262 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:12:04.929792  503262 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:12:04.953585  503262 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:12:04.953700  503262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:12:05.008594  503262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 09:12:04.999183843 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:12:05.008690  503262 docker.go:319] overlay module found
	I1206 09:12:05.010235  503262 out.go:99] Using the docker driver based on user configuration
	I1206 09:12:05.010267  503262 start.go:309] selected driver: docker
	I1206 09:12:05.010273  503262 start.go:927] validating driver "docker" against <nil>
	I1206 09:12:05.010354  503262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:12:05.066424  503262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 09:12:05.056917339 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:12:05.066765  503262 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:12:05.067291  503262 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 09:12:05.067506  503262 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:12:05.069223  503262 out.go:171] Using Docker driver with root privileges
	I1206 09:12:05.070385  503262 cni.go:84] Creating CNI manager for ""
	I1206 09:12:05.070444  503262 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:12:05.070467  503262 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:12:05.070550  503262 start.go:353] cluster config:
	{Name:download-only-757324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-757324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:05.071676  503262 out.go:99] Starting "download-only-757324" primary control-plane node in "download-only-757324" cluster
	I1206 09:12:05.071693  503262 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:12:05.072641  503262 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:12:05.072671  503262 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:05.072737  503262 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:12:05.089269  503262 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 09:12:05.089448  503262 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 09:12:05.089483  503262 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 09:12:05.089490  503262 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 09:12:05.089510  503262 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 09:12:05.487679  503262 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:12:05.487719  503262 cache.go:65] Caching tarball of preloaded images
	I1206 09:12:05.487910  503262 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:05.489775  503262 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1206 09:12:05.489796  503262 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 09:12:05.612673  503262 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1206 09:12:05.612725  503262 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-757324 host does not exist
	  To start a cluster, run: "minikube start -p download-only-757324"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)
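The preload download URLs in this log carry a "?checksum=md5:<hex>" hint, with the digest fetched from the GCS API before the download starts. A minimal sketch of verifying such a digest while streaming the tarball; the URL and digest are copied verbatim from the v1.34.2 run above, and minikube's own download package handles this internally, so this is illustrative only:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
)

// verifyMD5 streams the response body through an MD5 hasher, so the
// multi-hundred-megabyte tarball never has to be held in memory, then
// compares the result against the expected digest.
func verifyMD5(url, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	h := md5.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// URL and digest taken from the log above.
	const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(url, "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload tarball verified")
}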

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-757324
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (11.46s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-215937 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-215937 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.463560088s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (11.46s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 09:12:27.748236  502867 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1206 09:12:27.748289  502867 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.96s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-215937
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-215937: exit status 85 (961.727709ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-449563 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-449563 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-449563                                                                                                                                                          │ download-only-449563 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-757324 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-757324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-757324                                                                                                                                                          │ download-only-757324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-215937 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-215937 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:12:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:12:16.339812  503632 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:12:16.339925  503632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:16.339940  503632 out.go:374] Setting ErrFile to fd 2...
	I1206 09:12:16.339946  503632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:16.340131  503632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:12:16.340624  503632 out.go:368] Setting JSON to true
	I1206 09:12:16.341579  503632 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6880,"bootTime":1765005456,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:12:16.341642  503632 start.go:143] virtualization: kvm guest
	I1206 09:12:16.343514  503632 out.go:99] [download-only-215937] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:12:16.343703  503632 notify.go:221] Checking for updates...
	I1206 09:12:16.344786  503632 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:12:16.346063  503632 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:12:16.347036  503632 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:12:16.348075  503632 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:12:16.349079  503632 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:12:16.351022  503632 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:12:16.351355  503632 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:12:16.376134  503632 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:12:16.376291  503632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:12:16.433302  503632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 09:12:16.421970002 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:12:16.433427  503632 docker.go:319] overlay module found
	I1206 09:12:16.434965  503632 out.go:99] Using the docker driver based on user configuration
	I1206 09:12:16.435002  503632 start.go:309] selected driver: docker
	I1206 09:12:16.435011  503632 start.go:927] validating driver "docker" against <nil>
	I1206 09:12:16.435090  503632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:12:16.488519  503632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 09:12:16.479439356 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:12:16.488725  503632 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:12:16.489414  503632 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 09:12:16.489644  503632 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:12:16.491326  503632 out.go:171] Using Docker driver with root privileges
	I1206 09:12:16.492320  503632 cni.go:84] Creating CNI manager for ""
	I1206 09:12:16.492391  503632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:12:16.492402  503632 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:12:16.492512  503632 start.go:353] cluster config:
	{Name:download-only-215937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-215937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:16.493579  503632 out.go:99] Starting "download-only-215937" primary control-plane node in "download-only-215937" cluster
	I1206 09:12:16.493600  503632 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:12:16.494485  503632 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:12:16.494512  503632 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:12:16.494554  503632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:12:16.513059  503632 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 09:12:16.513204  503632 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 09:12:16.513232  503632 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 09:12:16.513247  503632 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 09:12:16.513261  503632 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 09:12:16.593875  503632 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:12:16.593916  503632 cache.go:65] Caching tarball of preloaded images
	I1206 09:12:16.594125  503632 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:12:16.595691  503632 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1206 09:12:16.595716  503632 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 09:12:16.704965  503632 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1206 09:12:16.705023  503632 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22047-499330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-215937 host does not exist
	  To start a cluster, run: "minikube start -p download-only-215937"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.96s)
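
The preload download above is checksum-gated: the test asks the GCS API for the tarball's MD5 and appends it to the download URL as ?checksum=md5:.... A minimal sketch of the same verification done by hand (the output filename is illustrative; the checksum value is the one reported in the log above):

	# Fetch the preload tarball, then verify it against the MD5 from the GCS API.
	curl -fSL -o preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4"
	echo "b4861df7675d96066744278d08e2cd35  preload.tar.lz4" | md5sum -c -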

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.76s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.76s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-215937
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-416316 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-416316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-416316
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
I1206 09:12:30.451819  502867 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-279469 --alsologtostderr --binary-mirror http://127.0.0.1:43659 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-279469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-279469
--- PASS: TestBinaryMirror (0.83s)

TestOffline (62.69s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-120041 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-120041 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m0.118704369s)
helpers_test.go:175: Cleaning up "offline-crio-120041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-120041
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-120041: (2.570275417s)
--- PASS: TestOffline (62.69s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-101630
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-101630: exit status 85 (71.931235ms)

-- stdout --
	* Profile "addons-101630" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-101630"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-101630
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-101630: exit status 85 (72.23515ms)

-- stdout --
	* Profile "addons-101630" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-101630"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (101.26s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-101630 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-101630 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m41.262589368s)
--- PASS: TestAddons/Setup (101.26s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-101630 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-101630 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (9.43s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-101630 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-101630 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [acbd545b-4ccf-4516-a223-f5a9a8013869] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [acbd545b-4ccf-4516-a223-f5a9a8013869] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003983512s
addons_test.go:694: (dbg) Run:  kubectl --context addons-101630 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-101630 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-101630 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.43s)

TestAddons/StoppedEnableDisable (16.72s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-101630
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-101630: (16.414580117s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-101630
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-101630
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-101630
--- PASS: TestAddons/StoppedEnableDisable (16.72s)

TestCertOptions (26.62s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-546381 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1206 09:48:42.663520  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-546381 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.505419152s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-546381 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-546381 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-546381 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-546381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-546381
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-546381: (2.416130829s)
--- PASS: TestCertOptions (26.62s)

TestCertExpiration (212.87s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-669264 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-669264 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.428404365s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-669264 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-669264 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.588867596s)
helpers_test.go:175: Cleaning up "cert-expiration-669264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-669264
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-669264: (2.854155304s)
--- PASS: TestCertExpiration (212.87s)
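
The second start above re-issues certificates with an 8760h expiration window. A hedged way to confirm the new validity dates on the node, reusing the openssl invocation TestCertOptions runs (this check is not part of TestCertExpiration itself):

	# Print the notBefore/notAfter window of the apiserver certificate.
	out/minikube-linux-amd64 -p cert-expiration-669264 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"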

TestForceSystemdFlag (30.67s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-996303 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-996303 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.244503495s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-996303 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-996303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-996303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-996303: (3.085315497s)
--- PASS: TestForceSystemdFlag (30.67s)
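
The test asserts the flag took effect by cat-ing /etc/crio/crio.conf.d/02-crio.conf. A narrower hedged check (the cgroup_manager key name is an assumption about CRI-O's config schema, not something shown in this log):

	# Expect the CRI-O drop-in to select the systemd cgroup manager.
	out/minikube-linux-amd64 -p force-systemd-flag-996303 ssh "grep -n cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"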

TestForceSystemdEnv (36.6s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-168450 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-168450 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.964052073s)
helpers_test.go:175: Cleaning up "force-systemd-env-168450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-168450
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-168450: (2.63702849s)
--- PASS: TestForceSystemdEnv (36.60s)

TestErrorSpam/setup (19.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-603104 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-603104 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-603104 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-603104 --driver=docker  --container-runtime=crio: (19.184151294s)
--- PASS: TestErrorSpam/setup (19.18s)

TestErrorSpam/start (0.68s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (6.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause: exit status 80 (2.372710916s)

-- stdout --
	* Pausing node nospam-603104 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:17:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause: exit status 80 (2.158998465s)

-- stdout --
	* Pausing node nospam-603104 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:17:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause: exit status 80 (2.051948469s)

-- stdout --
	* Pausing node nospam-603104 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:17:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.58s)
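
All three pause attempts fail identically: minikube shells into the node and runs sudo runc list -f json, which exits 1 because /run/runc does not exist. A hedged triage sketch, assuming the profile is still up (CRI-O may keep container state under a root other than runc's default):

	# Check whether runc's default state directory exists on the node,
	# and list containers through the CRI instead of runc directly.
	out/minikube-linux-amd64 -p nospam-603104 ssh -- "ls -ld /run/runc"
	out/minikube-linux-amd64 -p nospam-603104 ssh -- "sudo crictl ps -a"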

TestErrorSpam/unpause (5.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause: exit status 80 (1.721071683s)

-- stdout --
	* Unpausing node nospam-603104 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:17:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause: exit status 80 (2.006197724s)

-- stdout --
	* Unpausing node nospam-603104 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:17:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause: exit status 80 (2.048062966s)

-- stdout --
	* Unpausing node nospam-603104 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:17:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.78s)

TestErrorSpam/stop (8.11s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 stop: (7.900855679s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-603104 --log_dir /tmp/nospam-603104 stop
--- PASS: TestErrorSpam/stop (8.11s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/test/nested/copy/502867/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (64.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857859 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1206 09:19:13.227928  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:13.237011  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:13.248878  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:13.270244  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:13.311580  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:13.393012  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:13.554559  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:13.876249  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:14.518275  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:15.799853  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-857859 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m4.559238667s)
--- PASS: TestFunctional/serial/StartWithProxy (64.56s)
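
The E1206 cert_rotation lines above come from client-go trying to reload the client certificate of the already-deleted addons-101630 profile; they appear to be noise relative to this test, which passes. A hedged sketch to confirm which profiles and contexts still exist (the path is the one printed in the errors):

	# List remaining minikube profiles and kubeconfig contexts.
	ls /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/
	kubectl config get-contexts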

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.29s)

=== RUN   TestFunctional/serial/SoftStart
I1206 09:19:17.576990  502867 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857859 --alsologtostderr -v=8
E1206 09:19:18.361935  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:23.483387  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-857859 --alsologtostderr -v=8: (6.292654566s)
functional_test.go:678: soft start took 6.293492271s for "functional-857859" cluster.
I1206 09:19:23.870062  502867 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.29s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-857859 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 cache add registry.k8s.io/pause:3.1: (1.031305629s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-857859 /tmp/TestFunctionalserialCacheCmdcacheadd_local2229051116/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cache add minikube-local-cache-test:functional-857859
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 cache add minikube-local-cache-test:functional-857859: (1.74509124s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cache delete minikube-local-cache-test:functional-857859
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-857859
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.784384ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
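
The reload round-trip exercised above, restated as a standalone sketch (this is just the command sequence from the log):

	# Remove the image on the node, confirm it is gone, then restore it from minikube's cache.
	out/minikube-linux-amd64 -p functional-857859 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-857859 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected: exit 1, image absent
	out/minikube-linux-amd64 -p functional-857859 cache reload
	out/minikube-linux-amd64 -p functional-857859 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected: image present again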

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 kubectl -- --context functional-857859 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-857859 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (67.86s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857859 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 09:19:33.725676  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:54.207136  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:20:35.168845  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-857859 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m7.862596616s)
functional_test.go:776: restart took 1m7.86274s for "functional-857859" cluster.
I1206 09:20:39.108666  502867 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (67.86s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-857859 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 logs: (1.245876682s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 logs --file /tmp/TestFunctionalserialLogsFileCmd2992361702/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 logs --file /tmp/TestFunctionalserialLogsFileCmd2992361702/001/logs.txt: (1.283158379s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (5.47s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-857859 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-857859
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-857859: exit status 115 (343.794994ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31399 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-857859 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-857859 delete -f testdata/invalidsvc.yaml: (1.945650437s)
--- PASS: TestFunctional/serial/InvalidService (5.47s)
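
The sequence above applies a Service whose pods never come up and confirms the CLI fails fast; a sketch, assuming the same testdata manifest:

  kubectl --context functional-857859 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-amd64 service invalid-svc -p functional-857859; echo "exit: $?"   # expect 115 (SVC_UNREACHABLE)
  kubectl --context functional-857859 delete -f testdata/invalidsvc.yaml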

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 config get cpus: exit status 14 (109.573685ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 config get cpus: exit status 14 (88.063313ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
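
The exit codes above encode the config semantics: `config get` on an unset key returns 14, otherwise 0. A sketch:

  out/minikube-linux-amd64 -p functional-857859 config set cpus 2
  out/minikube-linux-amd64 -p functional-857859 config get cpus                      # prints 2, exit 0
  out/minikube-linux-amd64 -p functional-857859 config unset cpus
  out/minikube-linux-amd64 -p functional-857859 config get cpus || echo "exit: $?"   # prints 14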

TestFunctional/parallel/DashboardCmd (6.23s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-857859 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-857859 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 538123: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.23s)
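
A sketch of the command under test, assuming the same profile (the harness then kills the daemon itself, hence the benign "process already finished" note above):

  out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-857859 &
  # prints a proxied URL on 127.0.0.1:36195 once the dashboard pod is ready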

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857859 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-857859 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.611616ms)

-- stdout --
	* [functional-857859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 09:20:58.283508  537264 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:20:58.283799  537264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:20:58.283809  537264 out.go:374] Setting ErrFile to fd 2...
	I1206 09:20:58.283814  537264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:20:58.284008  537264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:20:58.284450  537264 out.go:368] Setting JSON to false
	I1206 09:20:58.285526  537264 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7402,"bootTime":1765005456,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:20:58.285589  537264 start.go:143] virtualization: kvm guest
	I1206 09:20:58.287614  537264 out.go:179] * [functional-857859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:20:58.288896  537264 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:20:58.288948  537264 notify.go:221] Checking for updates...
	I1206 09:20:58.291263  537264 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:20:58.292434  537264 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:20:58.293543  537264 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:20:58.296731  537264 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:20:58.297983  537264 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:20:58.299618  537264 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:20:58.300429  537264 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:20:58.330151  537264 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:20:58.330258  537264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:20:58.388897  537264 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:20:58.378148653 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:20:58.389012  537264 docker.go:319] overlay module found
	I1206 09:20:58.390585  537264 out.go:179] * Using the docker driver based on existing profile
	I1206 09:20:58.391754  537264 start.go:309] selected driver: docker
	I1206 09:20:58.391775  537264 start.go:927] validating driver "docker" against &{Name:functional-857859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-857859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:20:58.391882  537264 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:20:58.393563  537264 out.go:203] 
	W1206 09:20:58.394541  537264 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:20:58.395570  537264 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857859 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
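
`--dry-run` validates flags and host state without mutating the cluster, so the undersized memory request fails with exit code 23 while the second, flag-free invocation succeeds. A sketch:

  out/minikube-linux-amd64 start -p functional-857859 --dry-run --memory 250MB --driver=docker --container-runtime=crio
  echo "exit: $?"   # expect 23 (RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the 1800MB minimum)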

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857859 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-857859 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.777912ms)

-- stdout --
	* [functional-857859] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 09:20:58.116297  537173 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:20:58.116556  537173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:20:58.116565  537173 out.go:374] Setting ErrFile to fd 2...
	I1206 09:20:58.116569  537173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:20:58.116896  537173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:20:58.117315  537173 out.go:368] Setting JSON to false
	I1206 09:20:58.118303  537173 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7402,"bootTime":1765005456,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:20:58.118364  537173 start.go:143] virtualization: kvm guest
	I1206 09:20:58.120281  537173 out.go:179] * [functional-857859] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 09:20:58.121595  537173 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:20:58.121684  537173 notify.go:221] Checking for updates...
	I1206 09:20:58.123923  537173 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:20:58.124963  537173 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:20:58.126073  537173 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:20:58.127174  537173 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:20:58.128278  537173 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:20:58.130026  537173 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:20:58.130617  537173 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:20:58.154392  537173 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:20:58.154507  537173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:20:58.211759  537173 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:20:58.199825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:20:58.211856  537173 docker.go:319] overlay module found
	I1206 09:20:58.213405  537173 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 09:20:58.214316  537173 start.go:309] selected driver: docker
	I1206 09:20:58.214329  537173 start.go:927] validating driver "docker" against &{Name:functional-857859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-857859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:20:58.214414  537173 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:20:58.215948  537173 out.go:203] 
	W1206 09:20:58.216906  537173 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:20:58.217870  537173 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
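
The command is identical to the DryRun case; the French output presumably comes from the test process locale. A sketch, assuming minikube honors LC_ALL:

  LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-857859 --dry-run --memory 250MB \
    --driver=docker --container-runtime=crio   # same exit 23, messages localized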

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
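
`status` supports a Go-template format string and JSON output; a sketch with the fields exercised above:

  out/minikube-linux-amd64 -p functional-857859 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
  out/minikube-linux-amd64 -p functional-857859 status -o json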

TestFunctional/parallel/ServiceCmdConnect (10.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-857859 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-857859 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7tfgq" [7f176d07-9dd4-4e66-8b41-5e7b8d290d72] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-7tfgq" [7f176d07-9dd4-4e66-8b41-5e7b8d290d72] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003713905s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31639
functional_test.go:1680: http://192.168.49.2:31639: success! body:
Request served by hello-node-connect-7d85dfc575-7tfgq

HTTP/1.1 GET /

Host: 192.168.49.2:31639
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
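
The end-to-end flow above, as a sketch (kicbase/echo-server replies with the request it received, which is what the "success! body" block shows):

  kubectl --context functional-857859 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-857859 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(out/minikube-linux-amd64 -p functional-857859 service hello-node-connect --url)
  curl -s "$URL"   # echoes the HTTP request back once the pod is Running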

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (46.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f6550c55-1477-4b1e-8222-58c5fc7b6d74] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003427309s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-857859 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-857859 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-857859 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-857859 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:20:54.058804  502867 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5b05c6b9-13da-4d67-87ad-92b856425fb8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolume "pvc-674faa7b-928a-4f6b-b0e5-b13dd3f54807" not found. not found)
helpers_test.go:352: "sp-pod" [5b05c6b9-13da-4d67-87ad-92b856425fb8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5b05c6b9-13da-4d67-87ad-92b856425fb8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.004392894s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-857859 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-857859 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-857859 delete -f testdata/storage-provisioner/pod.yaml: (1.172753992s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-857859 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:21:26.440748  502867 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1c9f7eb6-918d-4f1b-aae8-3463908d9140] Pending
helpers_test.go:352: "sp-pod" [1c9f7eb6-918d-4f1b-aae8-3463908d9140] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1c9f7eb6-918d-4f1b-aae8-3463908d9140] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003330089s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-857859 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.82s)
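
The point of the delete-and-reapply above is that data on the claim outlives the pod; a sketch with the same manifests:

  kubectl --context functional-857859 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-857859 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-857859 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-857859 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-857859 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
  kubectl --context functional-857859 exec sp-pod -- ls /tmp/mount                     # "foo" should survive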

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (1.81s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh -n functional-857859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cp functional-857859:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4272604603/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh -n functional-857859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh -n functional-857859 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)
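
`minikube cp` copies host-to-node and node-to-host; a round-trip sketch:

  out/minikube-linux-amd64 -p functional-857859 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-857859 cp functional-857859:/home/docker/cp-test.txt /tmp/cp-test.txt
  diff testdata/cp-test.txt /tmp/cp-test.txt   # round trip should be byte-identical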

TestFunctional/parallel/MySQL (21.48s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-857859 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-hbcg6" [fa4f2335-2026-4255-bba3-cb109b200bae] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-hbcg6" [fa4f2335-2026-4255-bba3-cb109b200bae] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.004427404s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-857859 exec mysql-5bb876957f-hbcg6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-857859 exec mysql-5bb876957f-hbcg6 -- mysql -ppassword -e "show databases;": exit status 1 (99.735664ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1206 09:21:21.064037  502867 retry.go:31] will retry after 572.238682ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-857859 exec mysql-5bb876957f-hbcg6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-857859 exec mysql-5bb876957f-hbcg6 -- mysql -ppassword -e "show databases;": exit status 1 (118.8616ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1206 09:21:21.755649  502867 retry.go:31] will retry after 2.196875534s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-857859 exec mysql-5bb876957f-hbcg6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-857859 exec mysql-5bb876957f-hbcg6 -- mysql -ppassword -e "show databases;": exit status 1 (94.650057ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1206 09:21:24.048078  502867 retry.go:31] will retry after 3.137815066s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-857859 exec mysql-5bb876957f-hbcg6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.48s)
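
The 1045 and 2002 errors above are just mysqld still initializing inside the pod, which is why the harness retries with backoff; a minimal retry sketch:

  until kubectl --context functional-857859 exec deploy/mysql -- mysql -ppassword -e 'show databases;'; do
    sleep 2   # retry until mysqld is up and accepts the root password
  done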

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/502867/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /etc/test/nested/copy/502867/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.69s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/502867.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /etc/ssl/certs/502867.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/502867.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /usr/share/ca-certificates/502867.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5028672.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /etc/ssl/certs/5028672.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5028672.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /usr/share/ca-certificates/5028672.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
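
The file names above are derived from this run (the 502867 suffix) plus hash-named aliases of the same certificates; a sketch of one spot check from inside the node:

  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /etc/ssl/certs/502867.pem"   # the synced test cert
  out/minikube-linux-amd64 -p functional-857859 ssh "sudo cat /etc/ssl/certs/51391683.0"   # its hash-named alias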

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-857859 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 ssh "sudo systemctl is-active docker": exit status 1 (292.677782ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 ssh "sudo systemctl is-active containerd": exit status 1 (327.805473ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
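
With crio selected, the other runtimes must be stopped; `systemctl is-active` exits 3 for an inactive unit, which minikube ssh surfaces as the status-3 messages above. A sketch:

  out/minikube-linux-amd64 -p functional-857859 ssh "sudo systemctl is-active docker"       # prints "inactive"
  out/minikube-linux-amd64 -p functional-857859 ssh "sudo systemctl is-active containerd"   # prints "inactive"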

TestFunctional/parallel/License (0.44s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-857859 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-857859 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-857859 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-857859 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 535517: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-857859 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-857859 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [58a3e363-504f-4ad6-99fb-9d15977cba0d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [58a3e363-504f-4ad6-99fb-9d15977cba0d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003162769s
I1206 09:20:57.895616  502867 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-857859 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-857859 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-dvp4n" [275885f8-ccbb-402b-91ea-08b4db910263] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-dvp4n" [275885f8-ccbb-402b-91ea-08b4db910263] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003729446s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-857859 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.230.223 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
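
The tunnel serial tests amount to: run `minikube tunnel`, wait for the LoadBalancer service to get an ingress IP, and hit it directly. A sketch (the tunnel must stay running, and may prompt for privileges to install routes):

  out/minikube-linux-amd64 -p functional-857859 tunnel &
  kubectl --context functional-857859 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.109.230.223   # the ingress IP reported above; reachable only while the tunnel runs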

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-857859 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (1.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 service list: (1.352381306s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.35s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "419.549557ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "82.322099ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "411.703582ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "89.150512ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)
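
`--light` skips the per-cluster status probe, which is why it returns roughly five times faster in the timings above; a sketch:

  out/minikube-linux-amd64 profile list -o json           # full listing, queries each cluster's status
  out/minikube-linux-amd64 profile list -o json --light   # config only, no status probe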

TestFunctional/parallel/ServiceCmd/JSONOutput (1.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 service list -o json: (1.364092691s)
functional_test.go:1504: Took "1.364204964s" to run "out/minikube-linux-amd64 -p functional-857859 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.36s)

TestFunctional/parallel/MountCmd/any-port (8.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdany-port3121199091/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765012862867989268" to /tmp/TestFunctionalparallelMountCmdany-port3121199091/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765012862867989268" to /tmp/TestFunctionalparallelMountCmdany-port3121199091/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765012862867989268" to /tmp/TestFunctionalparallelMountCmdany-port3121199091/001/test-1765012862867989268
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.926682ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 09:21:03.213251  502867 retry.go:31] will retry after 529.95154ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:21 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:21 test-1765012862867989268
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh cat /mount-9p/test-1765012862867989268
2025/12/06 09:21:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-857859 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6cd25e1b-e83c-44e1-a025-ac42b200e57b] Pending
helpers_test.go:352: "busybox-mount" [6cd25e1b-e83c-44e1-a025-ac42b200e57b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6cd25e1b-e83c-44e1-a025-ac42b200e57b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6cd25e1b-e83c-44e1-a025-ac42b200e57b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004191886s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-857859 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdany-port3121199091/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.19s)
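
Note: the mount flow above can be replayed by hand against the same profile; a minimal sketch (the host path /tmp/mnt is illustrative, standing in for the tmpdir the harness generates):

    out/minikube-linux-amd64 mount -p functional-857859 /tmp/mnt:/mount-9p &
    out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-857859 ssh "sudo umount -f /mount-9p"

The first findmnt probe races the mount daemon, which is why the harness tolerates an initial exit status 1 and retries before asserting.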

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30850
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30850
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
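
Note: the ServiceCmd subtests above are four views of the same NodePort service; condensed, the invocations exercised are:

    out/minikube-linux-amd64 -p functional-857859 service list -o json
    out/minikube-linux-amd64 -p functional-857859 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-857859 service hello-node --url --format={{.IP}}
    out/minikube-linux-amd64 -p functional-857859 service hello-node --url

Both URL forms resolve to the node IP (192.168.49.2) plus the service's NodePort (30850 in this run).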

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857859 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-857859
localhost/kicbase/echo-server:functional-857859
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857859 image ls --format short --alsologtostderr:
I1206 09:21:19.118738  544339 out.go:360] Setting OutFile to fd 1 ...
I1206 09:21:19.119054  544339 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.119064  544339 out.go:374] Setting ErrFile to fd 2...
I1206 09:21:19.119068  544339 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.119273  544339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:21:19.119904  544339 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.120001  544339 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.120494  544339 cli_runner.go:164] Run: docker container inspect functional-857859 --format={{.State.Status}}
I1206 09:21:19.139660  544339 ssh_runner.go:195] Run: systemctl --version
I1206 09:21:19.139705  544339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-857859
I1206 09:21:19.160251  544339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-857859/id_rsa Username:docker}
I1206 09:21:19.255075  544339 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
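
Note: the next three subtests repeat this listing, varying only the --format flag; the full set exercised is:

    out/minikube-linux-amd64 -p functional-857859 image ls --format short
    out/minikube-linux-amd64 -p functional-857859 image ls --format table
    out/minikube-linux-amd64 -p functional-857859 image ls --format json
    out/minikube-linux-amd64 -p functional-857859 image ls --format yaml

As the stderr traces show, every variant is backed by the same "sudo crictl images --output json" call inside the node; only the client-side rendering differs.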

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857859 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-857859  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-857859  │ 3beb8af58991b │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857859 image ls --format table --alsologtostderr:
I1206 09:21:19.602586  544619 out.go:360] Setting OutFile to fd 1 ...
I1206 09:21:19.602843  544619 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.602854  544619 out.go:374] Setting ErrFile to fd 2...
I1206 09:21:19.602861  544619 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.603070  544619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:21:19.603665  544619 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.603783  544619 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.604248  544619 cli_runner.go:164] Run: docker container inspect functional-857859 --format={{.State.Status}}
I1206 09:21:19.623654  544619 ssh_runner.go:195] Run: systemctl --version
I1206 09:21:19.623723  544619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-857859
I1206 09:21:19.643407  544619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-857859/id_rsa Username:docker}
I1206 09:21:19.739133  544619 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857859 image ls --format json --alsologtostderr:
[{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e0
4303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-857859"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnet
d@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"3beb8af58991b39e8db2593350af5e6612b1e866a77fb7b7ea6128568dc21fee","repoDigests":["localhost/minikube-local-cache-test@sha256:b41674633f701617da90
7c401def7fcc5eb1f7653f86df0901693a6c201d9dea"],"repoTags":["localhost/minikube-local-cache-test:functional-857859"],"size":"3330"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72
668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-cont
roller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4
c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad7
51196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857859 image ls --format json --alsologtostderr:
I1206 09:21:19.364390  544456 out.go:360] Setting OutFile to fd 1 ...
I1206 09:21:19.364669  544456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.364680  544456 out.go:374] Setting ErrFile to fd 2...
I1206 09:21:19.364685  544456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.364944  544456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:21:19.365637  544456 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.365741  544456 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.366196  544456 cli_runner.go:164] Run: docker container inspect functional-857859 --format={{.State.Status}}
I1206 09:21:19.388531  544456 ssh_runner.go:195] Run: systemctl --version
I1206 09:21:19.388599  544456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-857859
I1206 09:21:19.408302  544456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-857859/id_rsa Username:docker}
I1206 09:21:19.510128  544456 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857859 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-857859
size: "4944818"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3beb8af58991b39e8db2593350af5e6612b1e866a77fb7b7ea6128568dc21fee
repoDigests:
- localhost/minikube-local-cache-test@sha256:b41674633f701617da907c401def7fcc5eb1f7653f86df0901693a6c201d9dea
repoTags:
- localhost/minikube-local-cache-test:functional-857859
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857859 image ls --format yaml --alsologtostderr:
I1206 09:21:19.124006  544340 out.go:360] Setting OutFile to fd 1 ...
I1206 09:21:19.124291  544340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.124303  544340 out.go:374] Setting ErrFile to fd 2...
I1206 09:21:19.124306  544340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.124524  544340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:21:19.125169  544340 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.125280  544340 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.125770  544340 cli_runner.go:164] Run: docker container inspect functional-857859 --format={{.State.Status}}
I1206 09:21:19.144690  544340 ssh_runner.go:195] Run: systemctl --version
I1206 09:21:19.144765  544340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-857859
I1206 09:21:19.163638  544340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-857859/id_rsa Username:docker}
I1206 09:21:19.260450  544340 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 ssh pgrep buildkitd: exit status 1 (287.192104ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image build -t localhost/my-image:functional-857859 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 image build -t localhost/my-image:functional-857859 testdata/build --alsologtostderr: (3.569014053s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857859 image build -t localhost/my-image:functional-857859 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cb8c9890bff
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-857859
--> 2651c28fbff
Successfully tagged localhost/my-image:functional-857859
2651c28fbffa77a4709f4d65e334a1f168d8a20937875d8fe68978723326373e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857859 image build -t localhost/my-image:functional-857859 testdata/build --alsologtostderr:
I1206 09:21:19.648191  544630 out.go:360] Setting OutFile to fd 1 ...
I1206 09:21:19.648511  544630 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.648525  544630 out.go:374] Setting ErrFile to fd 2...
I1206 09:21:19.648529  544630 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:21:19.648755  544630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:21:19.649319  544630 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.649986  544630 config.go:182] Loaded profile config "functional-857859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:21:19.650504  544630 cli_runner.go:164] Run: docker container inspect functional-857859 --format={{.State.Status}}
I1206 09:21:19.668513  544630 ssh_runner.go:195] Run: systemctl --version
I1206 09:21:19.668560  544630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-857859
I1206 09:21:19.686275  544630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-857859/id_rsa Username:docker}
I1206 09:21:19.780173  544630 build_images.go:162] Building image from path: /tmp/build.884384660.tar
I1206 09:21:19.780265  544630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:21:19.788779  544630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.884384660.tar
I1206 09:21:19.792904  544630 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.884384660.tar: stat -c "%s %y" /var/lib/minikube/build/build.884384660.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.884384660.tar': No such file or directory
I1206 09:21:19.792929  544630 ssh_runner.go:362] scp /tmp/build.884384660.tar --> /var/lib/minikube/build/build.884384660.tar (3072 bytes)
I1206 09:21:19.811792  544630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.884384660
I1206 09:21:19.819992  544630 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.884384660 -xf /var/lib/minikube/build/build.884384660.tar
I1206 09:21:19.828920  544630 crio.go:315] Building image: /var/lib/minikube/build/build.884384660
I1206 09:21:19.828998  544630 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-857859 /var/lib/minikube/build/build.884384660 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 09:21:23.131366  544630 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-857859 /var/lib/minikube/build/build.884384660 --cgroup-manager=cgroupfs: (3.302325159s)
I1206 09:21:23.131442  544630 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.884384660
I1206 09:21:23.139959  544630 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.884384660.tar
I1206 09:21:23.147836  544630 build_images.go:218] Built localhost/my-image:functional-857859 from /tmp/build.884384660.tar
I1206 09:21:23.147875  544630 build_images.go:134] succeeded building to: functional-857859
I1206 09:21:23.147880  544630 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
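
Note: the stderr trace shows the crio build path: minikube first probes for buildkitd (pgrep exits 1 on crio nodes), then tars the build context to /tmp, copies it to /var/lib/minikube/build on the node, and builds there with "sudo podman build ... --cgroup-manager=cgroupfs". The equivalent manual invocation, sketched:

    out/minikube-linux-amd64 -p functional-857859 ssh pgrep buildkitd    # expected to exit 1 on crio
    out/minikube-linux-amd64 -p functional-857859 image build -t localhost/my-image:functional-857859 testdata/build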

TestFunctional/parallel/ImageCommands/Setup (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.933064946s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-857859
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image load --daemon kicbase/echo-server:functional-857859 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image load --daemon kicbase/echo-server:functional-857859 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-857859
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image load --daemon kicbase/echo-server:functional-857859 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 image load --daemon kicbase/echo-server:functional-857859 --alsologtostderr: (4.838101029s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.03s)

TestFunctional/parallel/MountCmd/specific-port (2.2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdspecific-port1220532044/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.151887ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 09:21:11.413596  502867 retry.go:31] will retry after 696.848363ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdspecific-port1220532044/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 ssh "sudo umount -f /mount-9p": exit status 1 (319.758244ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-857859 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdspecific-port1220532044/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)
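
Note: the final "umount -f" failing with "not mounted" (exit status 32) is the expected teardown path here, since the mount daemon had already been stopped; the test merely logs it. This variant differs from any-port solely by pinning the 9p server port (host path again illustrative):

    out/minikube-linux-amd64 mount -p functional-857859 /tmp/mnt:/mount-9p --port 46464 &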

TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2680364105/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2680364105/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2680364105/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T" /mount1: exit status 1 (402.401499ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 09:21:13.660753  502867 retry.go:31] will retry after 591.952293ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-857859 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2680364105/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2680364105/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857859 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2680364105/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)
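
Note: cleanup verification hinges on the --kill flag, which tears down every mount daemon for the profile in one call; sketched with an illustrative host path:

    out/minikube-linux-amd64 mount -p functional-857859 /tmp/mnt:/mount1 &
    out/minikube-linux-amd64 mount -p functional-857859 /tmp/mnt:/mount2 &
    out/minikube-linux-amd64 mount -p functional-857859 /tmp/mnt:/mount3 &
    out/minikube-linux-amd64 mount -p functional-857859 --kill=true

The "unable to find parent, assuming dead" lines confirm the kill reaped all three daemons before the per-mount stop helpers ran.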

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image save kicbase/echo-server:functional-857859 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image rm kicbase/echo-server:functional-857859 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-857859
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-857859 image save --daemon kicbase/echo-server:functional-857859 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-857859 image save --daemon kicbase/echo-server:functional-857859 --alsologtostderr: (1.919947501s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-857859
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.97s)
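
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together round-trip one image between host and cluster; condensed (/tmp is illustrative; this run used the Jenkins workspace directory):

    out/minikube-linux-amd64 -p functional-857859 image save kicbase/echo-server:functional-857859 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-857859 image rm kicbase/echo-server:functional-857859
    out/minikube-linux-amd64 -p functional-857859 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-857859 image save --daemon kicbase/echo-server:functional-857859

The --daemon form writes back into the host docker daemon, verified above with docker image inspect.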

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-857859
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-857859
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-857859
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-499330/.minikube/files/etc/test/nested/copy/502867/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (38.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326325 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 09:21:57.090286  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-326325 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (38.133313053s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (38.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 09:22:15.860366  502867 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326325 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-326325 --alsologtostderr -v=8: (6.147449378s)
functional_test.go:678: soft start took 6.148194131s for "functional-326325" cluster.
I1206 09:22:22.008537  502867 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.15s)
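
Note: "soft start" re-runs start against an already-running profile, so minikube reconciles the existing cluster instead of recreating it:

    out/minikube-linux-amd64 start -p functional-326325 --alsologtostderr -v=8

hence roughly 6s here versus the 38s cold StartWithProxy above.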

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-326325 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1022352286/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cache add minikube-local-cache-test:functional-326325
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-326325 cache add minikube-local-cache-test:functional-326325: (1.752065452s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cache delete minikube-local-cache-test:functional-326325
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-326325
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.03s)
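
Condensed, the local-image caching flow this test exercises looks like the following (a sketch, not the test itself: a stock minikube binary stands in for out/minikube-linux-amd64, and <build-context> is a placeholder for whatever directory holds the Dockerfile):

	docker build -t minikube-local-cache-test:functional-326325 <build-context>
	minikube -p functional-326325 cache add minikube-local-cache-test:functional-326325      # caches the image and loads it into the node
	minikube -p functional-326325 cache delete minikube-local-cache-test:functional-326325
	docker rmi minikube-local-cache-test:functional-326325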

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.423389ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)
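
In plain commands, the reload behavior verified above is (a sketch, assuming the functional-326325 profile from this run):

	minikube -p functional-326325 ssh "sudo crictl rmi registry.k8s.io/pause:latest"       # remove the image in-node
	minikube -p functional-326325 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"  # exit 1: image is gone
	minikube -p functional-326325 cache reload                                             # push cached images back into the node
	minikube -p functional-326325 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"  # succeeds again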

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 kubectl -- --context functional-326325 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-326325 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (65.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326325 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-326325 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m5.887220837s)
functional_test.go:776: restart took 1m5.887376527s for "functional-326325" cluster.
I1206 09:23:34.900182  502867 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (65.89s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-326325 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-326325 logs: (1.218707549s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3304178361/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-326325 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3304178361/001/logs.txt: (1.207541743s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (5.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-326325 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-326325
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-326325: exit status 115 (346.120596ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32202 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-326325 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-326325 delete -f testdata/invalidsvc.yaml: (1.54325294s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (5.06s)
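
The exit status 115 above is the expected result: minikube service resolves the NodePort but refuses to hand back a usable URL when no running pod backs the service. A minimal reproduction (sketch):

	kubectl --context functional-326325 apply -f testdata/invalidsvc.yaml
	minikube -p functional-326325 service invalid-svc; echo $?   # prints the URL table, then exits 115 (SVC_UNREACHABLE)
	kubectl --context functional-326325 delete -f testdata/invalidsvc.yaml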

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (12.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-326325 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-326325 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 562287: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (12.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326325 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-326325 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (156.032745ms)
-- stdout --
	* [functional-326325] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1206 09:23:52.532280  558633 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:23:52.532368  558633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:23:52.532374  558633 out.go:374] Setting ErrFile to fd 2...
	I1206 09:23:52.532380  558633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:23:52.532582  558633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:23:52.533009  558633 out.go:368] Setting JSON to false
	I1206 09:23:52.534026  558633 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7576,"bootTime":1765005456,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:23:52.534077  558633 start.go:143] virtualization: kvm guest
	I1206 09:23:52.535962  558633 out.go:179] * [functional-326325] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:23:52.537116  558633 notify.go:221] Checking for updates...
	I1206 09:23:52.537133  558633 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:23:52.538196  558633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:23:52.539256  558633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:23:52.540225  558633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:23:52.541196  558633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:23:52.542203  558633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:23:52.543605  558633 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:23:52.544111  558633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:23:52.567349  558633 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:23:52.567432  558633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:23:52.621182  558633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:61 SystemTime:2025-12-06 09:23:52.611489894 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:23:52.621292  558633 docker.go:319] overlay module found
	I1206 09:23:52.623284  558633 out.go:179] * Using the docker driver based on existing profile
	I1206 09:23:52.624192  558633 start.go:309] selected driver: docker
	I1206 09:23:52.624205  558633 start.go:927] validating driver "docker" against &{Name:functional-326325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:23:52.624294  558633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:23:52.625803  558633 out.go:203] 
	W1206 09:23:52.626704  558633 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:23:52.627670  558633 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326325 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.37s)
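
Both invocations above use --dry-run, which validates flags against the existing profile without (re)starting anything; only the 250MB request trips the ~1800MB minimum. As a sketch:

	minikube start -p functional-326325 --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo $?   # 23: RSRC_INSUFFICIENT_REQ_MEMORY
	minikube start -p functional-326325 --dry-run --driver=docker --container-runtime=crio; echo $?                  # 0: profile validates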

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326325 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-326325 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (170.926682ms)
-- stdout --
	* [functional-326325] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1206 09:23:52.905603  558857 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:23:52.905714  558857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:23:52.905725  558857 out.go:374] Setting ErrFile to fd 2...
	I1206 09:23:52.905732  558857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:23:52.906148  558857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:23:52.906729  558857 out.go:368] Setting JSON to false
	I1206 09:23:52.907960  558857 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7577,"bootTime":1765005456,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:23:52.908021  558857 start.go:143] virtualization: kvm guest
	I1206 09:23:52.909535  558857 out.go:179] * [functional-326325] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 09:23:52.910854  558857 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:23:52.910898  558857 notify.go:221] Checking for updates...
	I1206 09:23:52.913252  558857 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:23:52.914271  558857 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:23:52.915358  558857 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:23:52.916328  558857 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:23:52.917392  558857 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:23:52.918883  558857 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:23:52.919618  558857 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:23:52.943763  558857 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:23:52.943846  558857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:23:53.003420  558857 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:59 SystemTime:2025-12-06 09:23:52.993502898 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:23:53.003605  558857 docker.go:319] overlay module found
	I1206 09:23:53.005992  558857 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 09:23:53.007077  558857 start.go:309] selected driver: docker
	I1206 09:23:53.007093  558857 start.go:927] validating driver "docker" against &{Name:functional-326325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:23:53.007201  558857 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:23:53.008892  558857 out.go:203] 
	W1206 09:23:53.009877  558857 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:23:53.010842  558857 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (11.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-326325 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-326325 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-wgsqq" [4a1e667d-a453-4644-b15c-adb76ebe9961] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-wgsqq" [4a1e667d-a453-4644-b15c-adb76ebe9961] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00340668s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 service hello-node-connect --url
2025/12/06 09:24:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30097
functional_test.go:1680: http://192.168.49.2:30097: success! body:
Request served by hello-node-connect-9f67c86d4-wgsqq
HTTP/1.1 GET /
Host: 192.168.49.2:30097
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (11.72s)
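
The request/response shown above can be reproduced by hand (a sketch; curl stands in for the test's Go HTTP client):

	kubectl --context functional-326325 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-326325 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(minikube -p functional-326325 service hello-node-connect --url)
	curl -s "$URL"   # echo-server replies with the serving pod name and the request headers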

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f7bbedb7-406f-4371-ac81-adbb447452f8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004666027s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-326325 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-326325 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-326325 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-326325 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:23:58.393230  502867 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9aee6603-ddbe-4751-aa29-78bcac0891cc] Pending
helpers_test.go:352: "sp-pod" [9aee6603-ddbe-4751-aa29-78bcac0891cc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9aee6603-ddbe-4751-aa29-78bcac0891cc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003909068s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-326325 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-326325 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-326325 delete -f testdata/storage-provisioner/pod.yaml: (1.081314689s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-326325 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:24:10.724064  502867 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bbc2cb17-e955-4f59-9c52-852608663ff5] Pending
helpers_test.go:352: "sp-pod" [bbc2cb17-e955-4f59-9c52-852608663ff5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [bbc2cb17-e955-4f59-9c52-852608663ff5] Running
E1206 09:24:13.227222  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003814958s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-326325 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.79s)
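
The point of the delete/re-apply dance above is persistence: data written through the claim must outlive the pod. Condensed (sketch):

	kubectl --context functional-326325 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-326325 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-326325 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-326325 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-326325 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-326325 exec sp-pod -- ls /tmp/mount   # foo survived the pod restart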

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh -n functional-326325 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cp functional-326325:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2365854852/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh -n functional-326325 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh -n functional-326325 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.84s)
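
The three cp cases above cover host-to-node, node-to-host, and a target whose parent directories do not yet exist. In short (sketch; ./cp-test.txt stands in for the test's temp path):

	minikube -p functional-326325 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
	minikube -p functional-326325 cp functional-326325:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
	minikube -p functional-326325 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # parent dirs created in-node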

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (16.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-326325 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-dkfqm" [018ab703-0019-4421-b743-e0ec40d68869] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-dkfqm" [018ab703-0019-4421-b743-e0ec40d68869] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 15.004053579s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-326325 exec mysql-844cf969f6-dkfqm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-326325 exec mysql-844cf969f6-dkfqm -- mysql -ppassword -e "show databases;": exit status 1 (99.184778ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1206 09:23:58.543944  502867 retry.go:31] will retry after 1.430167891s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-326325 exec mysql-844cf969f6-dkfqm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (16.82s)
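
The retry above is expected: the pod reports Running before mysqld finishes initializing, so the first exec hits ERROR 2002 on the socket and succeeds about 1.4s later. The probe itself is just (sketch; the pod name is specific to this run):

	kubectl --context functional-326325 exec mysql-844cf969f6-dkfqm -- mysql -ppassword -e "show databases;"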

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/502867/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo cat /etc/test/nested/copy/502867/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/502867.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo cat /etc/ssl/certs/502867.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/502867.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo cat /usr/share/ca-certificates/502867.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5028672.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo cat /etc/ssl/certs/5028672.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5028672.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo cat /usr/share/ca-certificates/5028672.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-326325 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)
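
The template above is a useful one-liner on its own: it dumps the label keys of the first node. Equivalent sketch:

	kubectl --context functional-326325 get nodes -o go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'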

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 ssh "sudo systemctl is-active docker": exit status 1 (284.896332ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 ssh "sudo systemctl is-active containerd": exit status 1 (278.096041ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)
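
Here the non-zero exits are the passing outcome: systemctl is-active exits 3 for an inactive unit, confirming docker and containerd are disabled when the cluster runs with --container-runtime=crio. Sketch (the crio check is an assumption, not part of this test):

	minikube -p functional-326325 ssh "sudo systemctl is-active docker"       # "inactive", exit 3
	minikube -p functional-326325 ssh "sudo systemctl is-active containerd"   # "inactive", exit 3
	minikube -p functional-326325 ssh "sudo systemctl is-active crio"         # expected "active", exit 0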

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-326325 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-326325 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-5zwmr" [df9b8647-8663-4e0f-9571-6ac7d81e763c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-5zwmr" [df9b8647-8663-4e0f-9571-6ac7d81e763c] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003428345s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.21s)
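
Note: the deployment above can be replayed with plain kubectl against the same context; kubectl wait stands in for the suite's own pod poller (a sketch):

	kubectl --context functional-326325 create deployment hello-node --image=kicbase/echo-server
	kubectl --context functional-326325 expose deployment hello-node --type=NodePort --port=8080
	# block until the backing pod is Ready (the suite saw this within ~8s)
	kubectl --context functional-326325 wait --for=condition=Ready pod -l app=hello-node --timeout=600s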

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)
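
Note: update-context rewrites the cluster entry in the active kubeconfig so it points at the profile's current API-server IP and port; the three no_* subtests only vary the kubeconfig state the command starts from, which is why they all reduce to the same invocation. Run by hand it looks like this (a sketch; the kubectl line is a standard verification step, not part of the suite):

	out/minikube-linux-amd64 -p functional-326325 update-context
	# confirm the rewritten server address
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'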

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "350.825306ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "73.582376ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "459.320985ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "88.487533ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.55s)
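
Note: the timings above show why --light exists: it skips the per-profile status probes and returns in under 100ms versus roughly half a second. For scripting against the JSON output, a jq sketch (assuming the current output shape with top-level "valid"/"invalid" arrays; field names may differ across minikube versions):

	out/minikube-linux-amd64 profile list -o json | jq -r '.valid[] | "\(.Name) \(.Status)"'
	out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'   # names only, no status probes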

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1130461610/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765013025723522287" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1130461610/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765013025723522287" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1130461610/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765013025723522287" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1130461610/001/test-1765013025723522287
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.287611ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1206 09:23:46.076226  502867 retry.go:31] will retry after 264.5648ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:23 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:23 test-1765013025723522287
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh cat /mount-9p/test-1765013025723522287
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-326325 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e987ff9e-35eb-4e07-8014-e4e7422fce53] Pending
helpers_test.go:352: "busybox-mount" [e987ff9e-35eb-4e07-8014-e4e7422fce53] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e987ff9e-35eb-4e07-8014-e4e7422fce53] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e987ff9e-35eb-4e07-8014-e4e7422fce53] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003860433s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-326325 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1130461610/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.94s)
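
Note on what this covers: minikube mount exports the host directory over 9p and mounts it in the guest, so writes are visible in both directions and an in-cluster pod (busybox-mount) can delete a host-created file. The first findmnt probe failing is just a startup race, which is why retry.go waits and probes again. Condensed manual version (hypothetical host path):

	out/minikube-linux-amd64 mount -p functional-326325 /tmp/demo:/mount-9p &
	# poll until the 9p filesystem appears, then inspect it from the guest
	out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-326325 ssh -- ls -la /mount-9p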

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-326325 image ls --format short --alsologtostderr: (1.087846813s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326325 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-326325
localhost/kicbase/echo-server:functional-326325
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326325 image ls --format short --alsologtostderr:
I1206 09:24:07.738634  563026 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:07.738749  563026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:07.738760  563026 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:07.738767  563026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:07.739085  563026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:24:07.739887  563026 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:07.740024  563026 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:07.740714  563026 cli_runner.go:164] Run: docker container inspect functional-326325 --format={{.State.Status}}
I1206 09:24:07.765773  563026 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:07.765843  563026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326325
I1206 09:24:07.788148  563026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-326325/id_rsa Username:docker}
I1206 09:24:07.892904  563026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (1.09s)
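
Note: as the stderr trace shows, image ls on a crio node is a thin wrapper: minikube SSHes into the node container and runs "sudo crictl images --output json", then renders the result in the requested format. The raw data is available directly (a sketch):

	out/minikube-linux-amd64 -p functional-326325 ssh -- sudo crictl images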

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326325 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ localhost/minikube-local-cache-test     │ functional-326325  │ 3beb8af58991b │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-326325  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326325 image ls --format table --alsologtostderr:
I1206 09:24:12.025091  563717 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:12.025188  563717 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:12.025193  563717 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:12.025196  563717 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:12.025406  563717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:24:12.026201  563717 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:12.026300  563717 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:12.026785  563717 cli_runner.go:164] Run: docker container inspect functional-326325 --format={{.State.Status}}
I1206 09:24:12.045560  563717 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:12.045610  563717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326325
I1206 09:24:12.067005  563717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-326325/id_rsa Username:docker}
I1206 09:24:12.164410  563717 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326325 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5
b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@s
ha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3beb8af58991b39e8db2593350af5e6612b1e866a77fb7b7ea6128568dc21fee","repoDigests":["localhost/minikube-local-cache-test@sha256:b41674633f701617da907c401def7fcc5eb1f7653f86df0901693a6c201d9dea"],"repoTags":["localhost/minikube-local-cache-test:functional-326325"],"size":"3330"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07
bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba
3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-326325"],"size":"4945146"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df5
9a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac
4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326325 image ls --format json --alsologtostderr:
I1206 09:24:11.787287  563595 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:11.787591  563595 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:11.787607  563595 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:11.787614  563595 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:11.787841  563595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:24:11.788577  563595 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:11.788726  563595 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:11.789298  563595 cli_runner.go:164] Run: docker container inspect functional-326325 --format={{.State.Status}}
I1206 09:24:11.810163  563595 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:11.810224  563595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326325
I1206 09:24:11.828522  563595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-326325/id_rsa Username:docker}
I1206 09:24:11.922653  563595 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)
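
Note: the JSON format carries the same data as the table view and is the easiest one to post-process; repoTags, repoDigests, and size (bytes, as a string) are all visible in the dump above. A jq sketch for listing tagged images with their sizes:

	out/minikube-linux-amd64 -p functional-326325 image ls --format json \
	  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0]) \(.size)"'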

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326325 image ls --format yaml --alsologtostderr:
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3beb8af58991b39e8db2593350af5e6612b1e866a77fb7b7ea6128568dc21fee
repoDigests:
- localhost/minikube-local-cache-test@sha256:b41674633f701617da907c401def7fcc5eb1f7653f86df0901693a6c201d9dea
repoTags:
- localhost/minikube-local-cache-test:functional-326325
size: "3330"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-326325
size: "4945146"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326325 image ls --format yaml --alsologtostderr:
I1206 09:24:08.813761  563092 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:08.813853  563092 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:08.813860  563092 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:08.813865  563092 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:08.814066  563092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:24:08.814654  563092 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:08.814756  563092 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:08.815171  563092 cli_runner.go:164] Run: docker container inspect functional-326325 --format={{.State.Status}}
I1206 09:24:08.833401  563092 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:08.833474  563092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326325
I1206 09:24:08.850144  563092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-326325/id_rsa Username:docker}
I1206 09:24:08.945511  563092 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 ssh pgrep buildkitd: exit status 1 (272.938604ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image build -t localhost/my-image:functional-326325 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-326325 image build -t localhost/my-image:functional-326325 testdata/build --alsologtostderr: (3.291291723s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326325 image build -t localhost/my-image:functional-326325 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 98513460e5a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-326325
--> 49c45cb8629
Successfully tagged localhost/my-image:functional-326325
49c45cb862971b5a90c09382023f42c65a93e0c4f2bcf3f4d2161fe5e44fd097
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326325 image build -t localhost/my-image:functional-326325 testdata/build --alsologtostderr:
I1206 09:24:09.316587  563285 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:09.316698  563285 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:09.316707  563285 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:09.316711  563285 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:09.316902  563285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
I1206 09:24:09.317478  563285 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:09.318139  563285 config.go:182] Loaded profile config "functional-326325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:24:09.318649  563285 cli_runner.go:164] Run: docker container inspect functional-326325 --format={{.State.Status}}
I1206 09:24:09.336134  563285 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:09.336183  563285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326325
I1206 09:24:09.352543  563285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/functional-326325/id_rsa Username:docker}
I1206 09:24:09.446420  563285 build_images.go:162] Building image from path: /tmp/build.3750868101.tar
I1206 09:24:09.446514  563285 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:24:09.454932  563285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3750868101.tar
I1206 09:24:09.458921  563285 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3750868101.tar: stat -c "%s %y" /var/lib/minikube/build/build.3750868101.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3750868101.tar': No such file or directory
I1206 09:24:09.458948  563285 ssh_runner.go:362] scp /tmp/build.3750868101.tar --> /var/lib/minikube/build/build.3750868101.tar (3072 bytes)
I1206 09:24:09.479066  563285 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3750868101
I1206 09:24:09.487938  563285 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3750868101 -xf /var/lib/minikube/build/build.3750868101.tar
I1206 09:24:09.496183  563285 crio.go:315] Building image: /var/lib/minikube/build/build.3750868101
I1206 09:24:09.496243  563285 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-326325 /var/lib/minikube/build/build.3750868101 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 09:24:12.524621  563285 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-326325 /var/lib/minikube/build/build.3750868101 --cgroup-manager=cgroupfs: (3.028351974s)
I1206 09:24:12.524695  563285 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3750868101
I1206 09:24:12.532838  563285 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3750868101.tar
I1206 09:24:12.540108  563285 build_images.go:218] Built localhost/my-image:functional-326325 from /tmp/build.3750868101.tar
I1206 09:24:12.540148  563285 build_images.go:134] succeeded building to: functional-326325
I1206 09:24:12.540153  563285 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.79s)
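
Note: on crio, image build also happens on the node: the context directory is tarred, copied in over SSH, and built with "sudo podman build" (all visible in the trace above). Judging from the STEP output, testdata/build holds a three-line Dockerfile along these lines (a reconstruction, not the actual file):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /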

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-326325
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.92s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image load --daemon kicbase/echo-server:functional-326325 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-326325 image load --daemon kicbase/echo-server:functional-326325 --alsologtostderr: (3.503025594s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.76s)
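
Note: --daemon makes image load read the image from the host's Docker daemon (where Setup just tagged it) instead of a tar file, then push it into the node's container storage; the follow-up image ls verifies it landed. The round trip in short (a sketch):

	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-326325
	out/minikube-linux-amd64 -p functional-326325 image load --daemon kicbase/echo-server:functional-326325
	out/minikube-linux-amd64 -p functional-326325 image ls | grep echo-server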

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 service list -o json
functional_test.go:1504: Took "352.250203ms" to run "out/minikube-linux-amd64 -p functional-326325 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image load --daemon kicbase/echo-server:functional-326325 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32749
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-326325
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image load --daemon kicbase/echo-server:functional-326325 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32749
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.35s)
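
Note: HTTPS, Format, and URL all resolve the same NodePort service; the endpoint is the node IP of the docker-driver bridge (192.168.49.2) plus the NodePort allocated earlier (32749). The URL is reachable straight from the host (a sketch):

	URL=$(out/minikube-linux-amd64 -p functional-326325 service hello-node --url)
	curl -s "$URL"   # echo-server answers with the request details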

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image save kicbase/echo-server:functional-326325 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image rm kicbase/echo-server:functional-326325 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
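
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together round-trip an image through a tarball: save it to the host, remove it from the node, then restore it from the tar and confirm with image ls. Condensed, with a hypothetical tar path:

	out/minikube-linux-amd64 -p functional-326325 image save kicbase/echo-server:functional-326325 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-326325 image rm kicbase/echo-server:functional-326325
	out/minikube-linux-amd64 -p functional-326325 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-326325 image ls | grep echo-server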

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-326325
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 image save --daemon kicbase/echo-server:functional-326325 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-326325
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo270623002/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.037863ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 09:23:55.986687  502867 retry.go:31] will retry after 707.466162ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo270623002/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 ssh "sudo umount -f /mount-9p": exit status 1 (263.883625ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-326325 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo270623002/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-326325 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-326325 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-326325 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-326325 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 560533: os: process already finished
helpers_test.go:519: unable to terminate pid 560337: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-326325 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-326325 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [56f2f846-7d1e-464c-91d1-366f85a7bf80] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [56f2f846-7d1e-464c-91d1-366f85a7bf80] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004185229s
I1206 09:24:06.076517  502867 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.62s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3646089604/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3646089604/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3646089604/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T" /mount1: exit status 1 (337.019916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 09:23:58.115840  502867 retry.go:31] will retry after 333.908001ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-326325 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-326325 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3646089604/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3646089604/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326325 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3646089604/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.62s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-326325 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.218.207 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-326325 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-326325
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-326325
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-326325
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (147.59s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1206 09:24:40.934018  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:47.667892  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:47.674310  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:47.685622  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:47.706989  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:47.748374  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:47.829885  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:47.991644  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:48.313391  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:48.955377  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:50.237016  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:52.798415  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:25:57.920685  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:26:08.162176  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:26:28.644164  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m26.883151066s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (147.59s)

TestMultiControlPlane/serial/DeployApp (5.55s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 kubectl -- rollout status deployment/busybox: (3.532219567s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-866qx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-hwbv7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-xx44g -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-866qx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-hwbv7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-xx44g -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-866qx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-hwbv7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-xx44g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.55s)

TestMultiControlPlane/serial/PingHostFromPods (1.07s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-866qx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-866qx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-hwbv7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-hwbv7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-xx44g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 kubectl -- exec busybox-7b57f96db7-xx44g -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)

TestMultiControlPlane/serial/AddWorkerNode (26.07s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 node add --alsologtostderr -v 5
E1206 09:27:09.605622  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 node add --alsologtostderr -v 5: (25.206300463s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.07s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-265954 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (17.13s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp testdata/cp-test.txt ha-265954:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2536474070/001/cp-test_ha-265954.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954:/home/docker/cp-test.txt ha-265954-m02:/home/docker/cp-test_ha-265954_ha-265954-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m02 "sudo cat /home/docker/cp-test_ha-265954_ha-265954-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954:/home/docker/cp-test.txt ha-265954-m03:/home/docker/cp-test_ha-265954_ha-265954-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m03 "sudo cat /home/docker/cp-test_ha-265954_ha-265954-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954:/home/docker/cp-test.txt ha-265954-m04:/home/docker/cp-test_ha-265954_ha-265954-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m04 "sudo cat /home/docker/cp-test_ha-265954_ha-265954-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp testdata/cp-test.txt ha-265954-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2536474070/001/cp-test_ha-265954-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m02:/home/docker/cp-test.txt ha-265954:/home/docker/cp-test_ha-265954-m02_ha-265954.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954 "sudo cat /home/docker/cp-test_ha-265954-m02_ha-265954.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m02:/home/docker/cp-test.txt ha-265954-m03:/home/docker/cp-test_ha-265954-m02_ha-265954-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m03 "sudo cat /home/docker/cp-test_ha-265954-m02_ha-265954-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m02:/home/docker/cp-test.txt ha-265954-m04:/home/docker/cp-test_ha-265954-m02_ha-265954-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m04 "sudo cat /home/docker/cp-test_ha-265954-m02_ha-265954-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp testdata/cp-test.txt ha-265954-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2536474070/001/cp-test_ha-265954-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m03:/home/docker/cp-test.txt ha-265954:/home/docker/cp-test_ha-265954-m03_ha-265954.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954 "sudo cat /home/docker/cp-test_ha-265954-m03_ha-265954.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m03:/home/docker/cp-test.txt ha-265954-m02:/home/docker/cp-test_ha-265954-m03_ha-265954-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m02 "sudo cat /home/docker/cp-test_ha-265954-m03_ha-265954-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m03:/home/docker/cp-test.txt ha-265954-m04:/home/docker/cp-test_ha-265954-m03_ha-265954-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m04 "sudo cat /home/docker/cp-test_ha-265954-m03_ha-265954-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp testdata/cp-test.txt ha-265954-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2536474070/001/cp-test_ha-265954-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m04:/home/docker/cp-test.txt ha-265954:/home/docker/cp-test_ha-265954-m04_ha-265954.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954 "sudo cat /home/docker/cp-test_ha-265954-m04_ha-265954.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m04:/home/docker/cp-test.txt ha-265954-m02:/home/docker/cp-test_ha-265954-m04_ha-265954-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m02 "sudo cat /home/docker/cp-test_ha-265954-m04_ha-265954-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 cp ha-265954-m04:/home/docker/cp-test.txt ha-265954-m03:/home/docker/cp-test_ha-265954-m04_ha-265954-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 ssh -n ha-265954-m03 "sudo cat /home/docker/cp-test_ha-265954-m04_ha-265954-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.13s)

TestMultiControlPlane/serial/StopSecondaryNode (13.3s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 node stop m02 --alsologtostderr -v 5: (12.626768802s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5: exit status 7 (675.565018ms)

-- stdout --
	ha-265954
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-265954-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-265954-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-265954-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1206 09:27:52.161550  583687 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:52.161824  583687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:52.161835  583687 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:52.161841  583687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:52.162056  583687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:27:52.162314  583687 out.go:368] Setting JSON to false
	I1206 09:27:52.162345  583687 mustload.go:66] Loading cluster: ha-265954
	I1206 09:27:52.162492  583687 notify.go:221] Checking for updates...
	I1206 09:27:52.162811  583687 config.go:182] Loaded profile config "ha-265954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:27:52.162845  583687 status.go:174] checking status of ha-265954 ...
	I1206 09:27:52.163440  583687 cli_runner.go:164] Run: docker container inspect ha-265954 --format={{.State.Status}}
	I1206 09:27:52.183363  583687 status.go:371] ha-265954 host status = "Running" (err=<nil>)
	I1206 09:27:52.183417  583687 host.go:66] Checking if "ha-265954" exists ...
	I1206 09:27:52.183833  583687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-265954
	I1206 09:27:52.201050  583687 host.go:66] Checking if "ha-265954" exists ...
	I1206 09:27:52.201322  583687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:27:52.201381  583687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-265954
	I1206 09:27:52.220238  583687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/ha-265954/id_rsa Username:docker}
	I1206 09:27:52.311896  583687 ssh_runner.go:195] Run: systemctl --version
	I1206 09:27:52.318331  583687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:27:52.330699  583687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:52.384819  583687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-06 09:27:52.375342272 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:52.385544  583687 kubeconfig.go:125] found "ha-265954" server: "https://192.168.49.254:8443"
	I1206 09:27:52.385589  583687 api_server.go:166] Checking apiserver status ...
	I1206 09:27:52.385633  583687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:27:52.397545  583687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	W1206 09:27:52.405942  583687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:27:52.405984  583687 ssh_runner.go:195] Run: ls
	I1206 09:27:52.409646  583687 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 09:27:52.413738  583687 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 09:27:52.413758  583687 status.go:463] ha-265954 apiserver status = Running (err=<nil>)
	I1206 09:27:52.413768  583687 status.go:176] ha-265954 status: &{Name:ha-265954 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:27:52.413785  583687 status.go:174] checking status of ha-265954-m02 ...
	I1206 09:27:52.414011  583687 cli_runner.go:164] Run: docker container inspect ha-265954-m02 --format={{.State.Status}}
	I1206 09:27:52.431699  583687 status.go:371] ha-265954-m02 host status = "Stopped" (err=<nil>)
	I1206 09:27:52.431718  583687 status.go:384] host is not running, skipping remaining checks
	I1206 09:27:52.431724  583687 status.go:176] ha-265954-m02 status: &{Name:ha-265954-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:27:52.431750  583687 status.go:174] checking status of ha-265954-m03 ...
	I1206 09:27:52.432036  583687 cli_runner.go:164] Run: docker container inspect ha-265954-m03 --format={{.State.Status}}
	I1206 09:27:52.449731  583687 status.go:371] ha-265954-m03 host status = "Running" (err=<nil>)
	I1206 09:27:52.449753  583687 host.go:66] Checking if "ha-265954-m03" exists ...
	I1206 09:27:52.450054  583687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-265954-m03
	I1206 09:27:52.466590  583687 host.go:66] Checking if "ha-265954-m03" exists ...
	I1206 09:27:52.466882  583687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:27:52.466928  583687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-265954-m03
	I1206 09:27:52.484318  583687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/ha-265954-m03/id_rsa Username:docker}
	I1206 09:27:52.574506  583687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:27:52.586726  583687 kubeconfig.go:125] found "ha-265954" server: "https://192.168.49.254:8443"
	I1206 09:27:52.586751  583687 api_server.go:166] Checking apiserver status ...
	I1206 09:27:52.586781  583687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:27:52.597097  583687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W1206 09:27:52.605085  583687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:27:52.605135  583687 ssh_runner.go:195] Run: ls
	I1206 09:27:52.608610  583687 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 09:27:52.612563  583687 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 09:27:52.612586  583687 status.go:463] ha-265954-m03 apiserver status = Running (err=<nil>)
	I1206 09:27:52.612594  583687 status.go:176] ha-265954-m03 status: &{Name:ha-265954-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:27:52.612608  583687 status.go:174] checking status of ha-265954-m04 ...
	I1206 09:27:52.612892  583687 cli_runner.go:164] Run: docker container inspect ha-265954-m04 --format={{.State.Status}}
	I1206 09:27:52.631360  583687 status.go:371] ha-265954-m04 host status = "Running" (err=<nil>)
	I1206 09:27:52.631381  583687 host.go:66] Checking if "ha-265954-m04" exists ...
	I1206 09:27:52.631656  583687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-265954-m04
	I1206 09:27:52.649043  583687 host.go:66] Checking if "ha-265954-m04" exists ...
	I1206 09:27:52.649335  583687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:27:52.649381  583687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-265954-m04
	I1206 09:27:52.668665  583687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/ha-265954-m04/id_rsa Username:docker}
	I1206 09:27:52.759955  583687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:27:52.772859  583687 status.go:176] ha-265954-m04 status: &{Name:ha-265954-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.30s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.25s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 node start m02 --alsologtostderr -v 5: (13.327776843s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.58s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 stop --alsologtostderr -v 5
E1206 09:28:31.528020  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:42.664266  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:42.670828  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:42.682315  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:42.703757  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:42.745426  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:42.826878  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:42.988400  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:43.310394  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:43.952149  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:45.233819  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:47.795783  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:28:52.917314  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 stop --alsologtostderr -v 5: (52.016789113s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 start --wait true --alsologtostderr -v 5
E1206 09:29:03.159353  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:13.227378  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:23.641688  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:30:04.603994  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 start --wait true --alsologtostderr -v 5: (1m18.419130501s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.58s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.64s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 node delete m03 --alsologtostderr -v 5: (9.790939331s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.64s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (41.35s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 stop --alsologtostderr -v 5
E1206 09:30:47.669976  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 stop --alsologtostderr -v 5: (41.224090728s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5: exit status 7 (125.122646ms)

-- stdout --
	ha-265954
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-265954-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-265954-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 09:31:11.803908  598164 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:31:11.804040  598164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:31:11.804046  598164 out.go:374] Setting ErrFile to fd 2...
	I1206 09:31:11.804050  598164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:31:11.804257  598164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:31:11.804422  598164 out.go:368] Setting JSON to false
	I1206 09:31:11.804447  598164 mustload.go:66] Loading cluster: ha-265954
	I1206 09:31:11.804600  598164 notify.go:221] Checking for updates...
	I1206 09:31:11.804863  598164 config.go:182] Loaded profile config "ha-265954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:31:11.804881  598164 status.go:174] checking status of ha-265954 ...
	I1206 09:31:11.805318  598164 cli_runner.go:164] Run: docker container inspect ha-265954 --format={{.State.Status}}
	I1206 09:31:11.825075  598164 status.go:371] ha-265954 host status = "Stopped" (err=<nil>)
	I1206 09:31:11.825107  598164 status.go:384] host is not running, skipping remaining checks
	I1206 09:31:11.825114  598164 status.go:176] ha-265954 status: &{Name:ha-265954 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:31:11.825139  598164 status.go:174] checking status of ha-265954-m02 ...
	I1206 09:31:11.825409  598164 cli_runner.go:164] Run: docker container inspect ha-265954-m02 --format={{.State.Status}}
	I1206 09:31:11.843753  598164 status.go:371] ha-265954-m02 host status = "Stopped" (err=<nil>)
	I1206 09:31:11.843787  598164 status.go:384] host is not running, skipping remaining checks
	I1206 09:31:11.843798  598164 status.go:176] ha-265954-m02 status: &{Name:ha-265954-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:31:11.843826  598164 status.go:174] checking status of ha-265954-m04 ...
	I1206 09:31:11.844131  598164 cli_runner.go:164] Run: docker container inspect ha-265954-m04 --format={{.State.Status}}
	I1206 09:31:11.861997  598164 status.go:371] ha-265954-m04 host status = "Stopped" (err=<nil>)
	I1206 09:31:11.862034  598164 status.go:384] host is not running, skipping remaining checks
	I1206 09:31:11.862043  598164 status.go:176] ha-265954-m04 status: &{Name:ha-265954-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.35s)
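Note: the exit status 7 from `status` above is expected for a stopped profile; scripted health checks can branch on the exit code instead of parsing stdout. A minimal sketch against this run's profile:
    # non-zero exit from `minikube status` means not all nodes are running
    if ! out/minikube-linux-amd64 -p ha-265954 status >/dev/null 2>&1; then
        echo "ha-265954 is not fully running"
    fi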

TestMultiControlPlane/serial/RestartCluster (56.39s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1206 09:31:15.370746  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:31:26.525898  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.594498497s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.39s)
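The readiness check at ha_test.go:594 is a single go-template query; reformatted here for readability (same template, minus the harness's extra quoting; prints one True/False per node Ready condition):
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'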

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (37.5s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-265954 node add --control-plane --alsologtostderr -v 5: (36.605315945s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-265954 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.50s)
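For reference, the secondary control-plane join exercised above reduces to (flags as in the test, verbosity dropped):
    out/minikube-linux-amd64 -p ha-265954 node add --control-plane
    out/minikube-linux-amd64 -p ha-265954 status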

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

TestJSONOutput/start/Command (38.59s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-933492 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-933492 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.591571897s)
--- PASS: TestJSONOutput/start/Command (38.59s)
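Each line emitted under --output=json is a CloudEvents envelope (see the raw events captured under TestErrorJSONOutput below). A minimal consumer sketch, assuming jq is available:
    # print step progress from the event stream; field names as seen in the captured events
    out/minikube-linux-amd64 start -p json-output-933492 --output=json --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + ": " + .data.message'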

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.2s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-933492 --output=json --user=testUser
E1206 09:33:42.666804  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-933492 --output=json --user=testUser: (6.200059171s)
--- PASS: TestJSONOutput/stop/Command (6.20s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-093567 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-093567 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.752452ms)

-- stdout --
	{"specversion":"1.0","id":"4d88da84-877b-4630-86c7-674dcf78d389","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-093567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"43c9e640-2fad-41e3-ad39-c17168f38d80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"08b967da-8cdc-4381-9656-ec8c35f94cc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"98de47d2-9ae6-4ee8-9de0-766e13f32a16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig"}}
	{"specversion":"1.0","id":"22b56ad3-77f3-462f-94ef-b8d898745dd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube"}}
	{"specversion":"1.0","id":"17fc83f6-f4a6-47fc-9956-8992f9bfbcea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cc898d5e-6587-458f-9613-7cc5dfdb562d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4fc6102a-27b3-4e54-8900-d32982bb3ef7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-093567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-093567
--- PASS: TestErrorJSONOutput (0.23s)
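The failure path emits a single io.k8s.sigs.minikube.error event (last line of the stdout above) whose data block carries name, exitcode, and message; a sketch for surfacing the reason, assuming jq:
    out/minikube-linux-amd64 start -p json-output-error-093567 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message + " (exit " + .data.exitcode + ")"'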

TestKicCustomNetwork/create_custom_network (36.2s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-975880 --network=
E1206 09:34:10.369770  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:34:13.227722  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-975880 --network=: (34.033985251s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-975880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-975880
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-975880: (2.143190621s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.20s)

TestKicCustomNetwork/use_default_bridge_network (22.26s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-013781 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-013781 --network=bridge: (20.233761398s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-013781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-013781
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-013781: (2.006934776s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.26s)

TestKicExistingNetwork (25.52s)

=== RUN   TestKicExistingNetwork
I1206 09:34:47.793474  502867 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 09:34:47.809687  502867 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 09:34:47.809749  502867 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1206 09:34:47.809776  502867 cli_runner.go:164] Run: docker network inspect existing-network
W1206 09:34:47.826707  502867 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1206 09:34:47.826738  502867 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1206 09:34:47.826754  502867 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1206 09:34:47.826901  502867 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 09:34:47.842781  502867 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-14a29a83a969 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ed:93:6c:14:a3} reservation:<nil>}
I1206 09:34:47.843095  502867 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e26df0}
I1206 09:34:47.843129  502867 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1206 09:34:47.843175  502867 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1206 09:34:47.888262  502867 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-288866 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-288866 --network=existing-network: (23.367558881s)
helpers_test.go:175: Cleaning up "existing-network-288866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-288866
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-288866: (2.024925644s)
I1206 09:35:13.297261  502867 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.52s)
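The pre-created network this test starts against can be reproduced manually with the flags from the network_create log above (sketch; the subnet is whichever free /24 the probe picks, 192.168.58.0/24 in this run):
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network
    out/minikube-linux-amd64 start -p existing-network-288866 --network=existing-network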

TestKicCustomSubnet (23.84s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-629125 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-629125 --subnet=192.168.60.0/24: (21.668332224s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-629125 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-629125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-629125
E1206 09:35:36.295619  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-629125: (2.148852447s)
--- PASS: TestKicCustomSubnet (23.84s)
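The subnet round-trip above in two commands (sketch, values from this run):
    out/minikube-linux-amd64 start -p custom-subnet-629125 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-629125 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24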

TestKicStaticIP (25.62s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-966541 --static-ip=192.168.200.200
E1206 09:35:47.670619  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-966541 --static-ip=192.168.200.200: (23.294855559s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-966541 ip
helpers_test.go:175: Cleaning up "static-ip-966541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-966541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-966541: (2.168271524s)
--- PASS: TestKicStaticIP (25.62s)
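Equivalent manual check (sketch, values from this run; `minikube ip` should echo the pinned address):
    out/minikube-linux-amd64 start -p static-ip-966541 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-966541 ip   # expected: 192.168.200.200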

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (49.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-975679 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-975679 --driver=docker  --container-runtime=crio: (22.144245695s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-979026 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-979026 --driver=docker  --container-runtime=crio: (21.826754985s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-975679
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-979026
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-979026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-979026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-979026: (2.353470733s)
helpers_test.go:175: Cleaning up "first-975679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-975679
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-975679: (2.409861672s)
--- PASS: TestMinikubeProfile (49.96s)
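The `profile list -ojson` calls above return machine-readable profile data; a sketch for listing profile names, assuming jq and that the output keeps minikube's valid/invalid grouping with a Name field per profile:
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'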

TestMountStart/serial/StartWithMountFirst (7.76s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-227796 --memory=3072 --mount-string /tmp/TestMountStartserial2156045376/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-227796 --memory=3072 --mount-string /tmp/TestMountStartserial2156045376/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.758880674s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.76s)
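The mount flags exercised above, condensed (sketch; /tmp/host-dir stands in for the generated temp directory):
    out/minikube-linux-amd64 start -p mount-start-1-227796 --memory=3072 \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-227796 ssh -- ls /minikube-host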

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-227796 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (8.01s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-245793 --memory=3072 --mount-string /tmp/TestMountStartserial2156045376/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-245793 --memory=3072 --mount-string /tmp/TestMountStartserial2156045376/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.006326786s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.01s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245793 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-227796 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-227796 --alsologtostderr -v=5: (1.688222516s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245793 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-245793
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-245793: (1.251938898s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.67s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-245793
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-245793: (6.669691047s)
--- PASS: TestMountStart/serial/RestartStopped (7.67s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245793 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (67.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-469254 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-469254 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m6.621041252s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.11s)

TestMultiNode/serial/DeployApp2Nodes (4.59s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-469254 -- rollout status deployment/busybox: (3.136704944s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-ctsxc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-rv2zj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-ctsxc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-rv2zj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-ctsxc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-rv2zj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.59s)
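The DNS verification loop above, reduced to its two moving parts (sketch; pod names carry the ReplicaSet hash and differ per run):
    out/minikube-linux-amd64 kubectl -p multinode-469254 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-ctsxc -- nslookup kubernetes.default.svc.cluster.local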

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-ctsxc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-ctsxc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-rv2zj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-469254 -- exec busybox-7b57f96db7-rv2zj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

TestMultiNode/serial/AddNode (25.59s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-469254 -v=5 --alsologtostderr
E1206 09:38:42.664303  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-469254 -v=5 --alsologtostderr: (24.9509121s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.59s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-469254 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.89s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp testdata/cp-test.txt multinode-469254:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2543193404/001/cp-test_multinode-469254.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254:/home/docker/cp-test.txt multinode-469254-m02:/home/docker/cp-test_multinode-469254_multinode-469254-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m02 "sudo cat /home/docker/cp-test_multinode-469254_multinode-469254-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254:/home/docker/cp-test.txt multinode-469254-m03:/home/docker/cp-test_multinode-469254_multinode-469254-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m03 "sudo cat /home/docker/cp-test_multinode-469254_multinode-469254-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp testdata/cp-test.txt multinode-469254-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2543193404/001/cp-test_multinode-469254-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254-m02:/home/docker/cp-test.txt multinode-469254:/home/docker/cp-test_multinode-469254-m02_multinode-469254.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254 "sudo cat /home/docker/cp-test_multinode-469254-m02_multinode-469254.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254-m02:/home/docker/cp-test.txt multinode-469254-m03:/home/docker/cp-test_multinode-469254-m02_multinode-469254-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m03 "sudo cat /home/docker/cp-test_multinode-469254-m02_multinode-469254-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp testdata/cp-test.txt multinode-469254-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2543193404/001/cp-test_multinode-469254-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254-m03:/home/docker/cp-test.txt multinode-469254:/home/docker/cp-test_multinode-469254-m03_multinode-469254.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254 "sudo cat /home/docker/cp-test_multinode-469254-m03_multinode-469254.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 cp multinode-469254-m03:/home/docker/cp-test.txt multinode-469254-m02:/home/docker/cp-test_multinode-469254-m03_multinode-469254-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m02 "sudo cat /home/docker/cp-test_multinode-469254-m03_multinode-469254-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.89s)
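The copy matrix above exercises every direction of `minikube cp`, which accepts a host path or <node>:<path> on either side; each transfer is then verified over ssh. One round as a sketch:
    out/minikube-linux-amd64 -p multinode-469254 cp testdata/cp-test.txt multinode-469254-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-469254 ssh -n multinode-469254-m02 "sudo cat /home/docker/cp-test.txt"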

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-469254 node stop m03: (1.273989057s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-469254 status: exit status 7 (493.432751ms)

-- stdout --
	multinode-469254
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-469254-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-469254-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-469254 status --alsologtostderr: exit status 7 (491.082988ms)

-- stdout --
	multinode-469254
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-469254-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-469254-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 09:39:12.647536  657951 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:39:12.647777  657951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:39:12.647786  657951 out.go:374] Setting ErrFile to fd 2...
	I1206 09:39:12.647791  657951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:39:12.648017  657951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:39:12.648193  657951 out.go:368] Setting JSON to false
	I1206 09:39:12.648219  657951 mustload.go:66] Loading cluster: multinode-469254
	I1206 09:39:12.648301  657951 notify.go:221] Checking for updates...
	I1206 09:39:12.648708  657951 config.go:182] Loaded profile config "multinode-469254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:39:12.648733  657951 status.go:174] checking status of multinode-469254 ...
	I1206 09:39:12.649295  657951 cli_runner.go:164] Run: docker container inspect multinode-469254 --format={{.State.Status}}
	I1206 09:39:12.670528  657951 status.go:371] multinode-469254 host status = "Running" (err=<nil>)
	I1206 09:39:12.670568  657951 host.go:66] Checking if "multinode-469254" exists ...
	I1206 09:39:12.670846  657951 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-469254
	I1206 09:39:12.689116  657951 host.go:66] Checking if "multinode-469254" exists ...
	I1206 09:39:12.689425  657951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:39:12.689482  657951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-469254
	I1206 09:39:12.706599  657951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/multinode-469254/id_rsa Username:docker}
	I1206 09:39:12.798038  657951 ssh_runner.go:195] Run: systemctl --version
	I1206 09:39:12.804697  657951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:39:12.816821  657951 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:39:12.872356  657951 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-06 09:39:12.862527661 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:39:12.872913  657951 kubeconfig.go:125] found "multinode-469254" server: "https://192.168.67.2:8443"
	I1206 09:39:12.872949  657951 api_server.go:166] Checking apiserver status ...
	I1206 09:39:12.873001  657951 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:12.884922  657951 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup
	W1206 09:39:12.893366  657951 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:39:12.893421  657951 ssh_runner.go:195] Run: ls
	I1206 09:39:12.897018  657951 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1206 09:39:12.901034  657951 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1206 09:39:12.901060  657951 status.go:463] multinode-469254 apiserver status = Running (err=<nil>)
	I1206 09:39:12.901080  657951 status.go:176] multinode-469254 status: &{Name:multinode-469254 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:39:12.901100  657951 status.go:174] checking status of multinode-469254-m02 ...
	I1206 09:39:12.901402  657951 cli_runner.go:164] Run: docker container inspect multinode-469254-m02 --format={{.State.Status}}
	I1206 09:39:12.918647  657951 status.go:371] multinode-469254-m02 host status = "Running" (err=<nil>)
	I1206 09:39:12.918672  657951 host.go:66] Checking if "multinode-469254-m02" exists ...
	I1206 09:39:12.918934  657951 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-469254-m02
	I1206 09:39:12.937187  657951 host.go:66] Checking if "multinode-469254-m02" exists ...
	I1206 09:39:12.937445  657951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:39:12.937522  657951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-469254-m02
	I1206 09:39:12.954595  657951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/22047-499330/.minikube/machines/multinode-469254-m02/id_rsa Username:docker}
	I1206 09:39:13.044823  657951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:39:13.056902  657951 status.go:176] multinode-469254-m02 status: &{Name:multinode-469254-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:39:13.056944  657951 status.go:174] checking status of multinode-469254-m03 ...
	I1206 09:39:13.057292  657951 cli_runner.go:164] Run: docker container inspect multinode-469254-m03 --format={{.State.Status}}
	I1206 09:39:13.074590  657951 status.go:371] multinode-469254-m03 host status = "Stopped" (err=<nil>)
	I1206 09:39:13.074614  657951 status.go:384] host is not running, skipping remaining checks
	I1206 09:39:13.074623  657951 status.go:176] multinode-469254-m03 status: &{Name:multinode-469254-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)

TestMultiNode/serial/StartAfterStop (7.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 node start m03 -v=5 --alsologtostderr
E1206 09:39:13.227603  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-469254 node start m03 -v=5 --alsologtostderr: (6.542918782s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.24s)
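Together with the previous test, the per-node lifecycle is (sketch; `status` exits 0 again once every node is back up):
    out/minikube-linux-amd64 -p multinode-469254 node stop m03
    out/minikube-linux-amd64 -p multinode-469254 node start m03
    out/minikube-linux-amd64 -p multinode-469254 status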

TestMultiNode/serial/RestartKeepsNodes (75.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-469254
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-469254
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-469254: (29.573997957s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-469254 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-469254 --wait=true -v=5 --alsologtostderr: (45.802846066s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-469254
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.51s)

TestMultiNode/serial/DeleteNode (5.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-469254 node delete m03: (4.671611359s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)

TestMultiNode/serial/StopMultiNode (28.54s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 stop
E1206 09:40:47.667945  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-469254 stop: (28.340611718s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-469254 status: exit status 7 (101.122803ms)

-- stdout --
	multinode-469254
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-469254-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-469254 status --alsologtostderr: exit status 7 (101.258132ms)

-- stdout --
	multinode-469254
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-469254-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 09:41:09.587767  667773 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:41:09.587875  667773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:41:09.587883  667773 out.go:374] Setting ErrFile to fd 2...
	I1206 09:41:09.587887  667773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:41:09.588120  667773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:41:09.588306  667773 out.go:368] Setting JSON to false
	I1206 09:41:09.588333  667773 mustload.go:66] Loading cluster: multinode-469254
	I1206 09:41:09.588410  667773 notify.go:221] Checking for updates...
	I1206 09:41:09.588834  667773 config.go:182] Loaded profile config "multinode-469254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:41:09.588993  667773 status.go:174] checking status of multinode-469254 ...
	I1206 09:41:09.590310  667773 cli_runner.go:164] Run: docker container inspect multinode-469254 --format={{.State.Status}}
	I1206 09:41:09.612074  667773 status.go:371] multinode-469254 host status = "Stopped" (err=<nil>)
	I1206 09:41:09.612100  667773 status.go:384] host is not running, skipping remaining checks
	I1206 09:41:09.612108  667773 status.go:176] multinode-469254 status: &{Name:multinode-469254 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:41:09.612134  667773 status.go:174] checking status of multinode-469254-m02 ...
	I1206 09:41:09.612375  667773 cli_runner.go:164] Run: docker container inspect multinode-469254-m02 --format={{.State.Status}}
	I1206 09:41:09.629195  667773 status.go:371] multinode-469254-m02 host status = "Stopped" (err=<nil>)
	I1206 09:41:09.629217  667773 status.go:384] host is not running, skipping remaining checks
	I1206 09:41:09.629226  667773 status.go:176] multinode-469254-m02 status: &{Name:multinode-469254-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.54s)
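Note that `minikube status` deliberately exits non-zero for a stopped cluster (exit status 7 in the runs above), so the harness treats that exit code as the expected outcome rather than a failure. A sketch of how a caller can distinguish the stopped case, assuming the same binary path and profile as this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-469254", "status")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Exit 7 is minikube's "host stopped" status, not an error.
			fmt.Printf("cluster stopped, as expected:\n%s", out)
			return
		}
		if err != nil {
			fmt.Println("unexpected status error:", err)
			return
		}
		fmt.Println("cluster still running")
	}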

TestMultiNode/serial/RestartMultiNode (44.75s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-469254 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-469254 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (44.162559343s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-469254 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.75s)

TestMultiNode/serial/ValidateNameConflict (25.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-469254
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-469254-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-469254-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.37193ms)

-- stdout --
	* [multinode-469254-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-469254-m02' is duplicated with machine name 'multinode-469254-m02' in profile 'multinode-469254'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-469254-m03 --driver=docker  --container-runtime=crio
E1206 09:42:10.732795  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-469254-m03 --driver=docker  --container-runtime=crio: (22.617586762s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-469254
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-469254: exit status 80 (285.53087ms)

-- stdout --
	* Adding node m03 to cluster multinode-469254 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-469254-m03 already exists in multinode-469254-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-469254-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-469254-m03: (2.372878306s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.41s)

TestPreload (84.3s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-114820 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-114820 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (48.235758245s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-114820 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-114820 image pull gcr.io/k8s-minikube/busybox: (2.392819134s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-114820
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-114820: (7.964619436s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-114820 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1206 09:43:42.663450  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-114820 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (23.062257532s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-114820 image list
helpers_test.go:175: Cleaning up "test-preload-114820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-114820
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-114820: (2.418802752s)
--- PASS: TestPreload (84.30s)
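The preload flow above is: start with --preload=false, pull busybox, stop, restart with --preload=true, then confirm the pulled image is still listed. A hedged sketch of that final verification step (profile name taken from this run; the check itself is illustrative, not the test's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// After the --preload=true restart, the image pulled before the stop
		// should still be present in the profile's image list.
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "test-preload-114820", "image", "list").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox survived the restart")
		} else {
			fmt.Println("busybox missing after restart")
		}
	}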

TestScheduledStopUnix (98.49s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-912259 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-912259 --memory=3072 --driver=docker  --container-runtime=crio: (22.704022608s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912259 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 09:44:11.088070  684817 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:44:11.088319  684817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:44:11.088327  684817 out.go:374] Setting ErrFile to fd 2...
	I1206 09:44:11.088331  684817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:44:11.088558  684817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:44:11.088810  684817 out.go:368] Setting JSON to false
	I1206 09:44:11.088903  684817 mustload.go:66] Loading cluster: scheduled-stop-912259
	I1206 09:44:11.089210  684817 config.go:182] Loaded profile config "scheduled-stop-912259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:44:11.089293  684817 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/config.json ...
	I1206 09:44:11.089502  684817 mustload.go:66] Loading cluster: scheduled-stop-912259
	I1206 09:44:11.089619  684817 config.go:182] Loaded profile config "scheduled-stop-912259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-912259 -n scheduled-stop-912259
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912259 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 09:44:11.497954  684983 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:44:11.498210  684983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:44:11.498219  684983 out.go:374] Setting ErrFile to fd 2...
	I1206 09:44:11.498225  684983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:44:11.498446  684983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:44:11.498720  684983 out.go:368] Setting JSON to false
	I1206 09:44:11.498917  684983 daemonize_unix.go:73] killing process 684851 as it is an old scheduled stop
	I1206 09:44:11.499025  684983 mustload.go:66] Loading cluster: scheduled-stop-912259
	I1206 09:44:11.499364  684983 config.go:182] Loaded profile config "scheduled-stop-912259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:44:11.499435  684983 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/config.json ...
	I1206 09:44:11.499648  684983 mustload.go:66] Loading cluster: scheduled-stop-912259
	I1206 09:44:11.499751  684983 config.go:182] Loaded profile config "scheduled-stop-912259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 09:44:11.504700  502867 retry.go:31] will retry after 101.585µs: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.505894  502867 retry.go:31] will retry after 169.523µs: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.507055  502867 retry.go:31] will retry after 157.43µs: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.508177  502867 retry.go:31] will retry after 411.414µs: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.509312  502867 retry.go:31] will retry after 541.784µs: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.510448  502867 retry.go:31] will retry after 1.091228ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.512786  502867 retry.go:31] will retry after 1.527888ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.514996  502867 retry.go:31] will retry after 2.226252ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.518252  502867 retry.go:31] will retry after 3.375282ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.522505  502867 retry.go:31] will retry after 2.691934ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.525750  502867 retry.go:31] will retry after 4.16716ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.530997  502867 retry.go:31] will retry after 10.730682ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.542251  502867 retry.go:31] will retry after 8.391152ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.551742  502867 retry.go:31] will retry after 17.969906ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.570073  502867 retry.go:31] will retry after 27.761227ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
I1206 09:44:11.598345  502867 retry.go:31] will retry after 48.730398ms: open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/pid: no such file or directory
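The `retry.go:31` lines above show the test polling for the scheduled-stop pid file with roughly doubling, jittered wait intervals. A minimal sketch of that exponential-backoff pattern, with a hypothetical pid-file path standing in for the real profile path:

	package main

	import (
		"fmt"
		"math/rand"
		"os"
		"time"
	)

	// retryExpo re-runs op with roughly doubling, jittered waits until it
	// succeeds or the current wait exceeds max.
	func retryExpo(op func() error, initial, max time.Duration) error {
		for wait := initial; ; wait = wait*2 + time.Duration(rand.Int63n(int64(wait))) {
			err := op()
			if err == nil {
				return nil
			}
			if wait > max {
				return fmt.Errorf("giving up: %w", err)
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		// Hypothetical path; the test polls the profile's scheduled-stop pid file.
		err := retryExpo(func() error {
			_, err := os.ReadFile("/tmp/example-profile/pid")
			return err
		}, 100*time.Microsecond, 50*time.Millisecond)
		fmt.Println("final result:", err)
	}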
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912259 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1206 09:44:13.227408  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912259 -n scheduled-stop-912259
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-912259
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912259 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 09:44:37.411178  685629 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:44:37.411590  685629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:44:37.411599  685629 out.go:374] Setting ErrFile to fd 2...
	I1206 09:44:37.411603  685629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:44:37.411810  685629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:44:37.412076  685629 out.go:368] Setting JSON to false
	I1206 09:44:37.412155  685629 mustload.go:66] Loading cluster: scheduled-stop-912259
	I1206 09:44:37.412503  685629 config.go:182] Loaded profile config "scheduled-stop-912259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:44:37.412577  685629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/scheduled-stop-912259/config.json ...
	I1206 09:44:37.412765  685629 mustload.go:66] Loading cluster: scheduled-stop-912259
	I1206 09:44:37.412859  685629 config.go:182] Loaded profile config "scheduled-stop-912259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1206 09:45:05.732773  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-912259
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-912259: exit status 7 (88.557644ms)

-- stdout --
	scheduled-stop-912259
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912259 -n scheduled-stop-912259
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912259 -n scheduled-stop-912259: exit status 7 (85.671348ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-912259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-912259
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-912259: (4.213525755s)
--- PASS: TestScheduledStopUnix (98.49s)

TestInsufficientStorage (12.08s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-941183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-941183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.552831069s)

-- stdout --
	{"specversion":"1.0","id":"e1d1a632-5601-4589-84e7-b2557ac3577c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-941183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0487eea6-6c43-4081-845d-a62aaffcd4ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"aa613638-4848-45a5-abec-0a1f5f96f492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b168c1a1-52c1-4716-9dd6-84091c04f37f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig"}}
	{"specversion":"1.0","id":"7f5eb141-ebc4-42a8-9458-1cdfc91c805d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube"}}
	{"specversion":"1.0","id":"7a247c09-e177-4567-815f-b443e1ce89d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"24a5ab8f-cb6f-42f3-98cd-b887ceee3737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c94858fd-b07a-476c-9510-8b98d227fab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e98ca32c-393f-4eb0-960b-8d0b7459de39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0275ce20-c8b8-4b28-9326-f39af9ac0418","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6cc84c0-ebd2-454e-b466-d59cbd504dcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5b7196ea-40ab-494a-a415-e3093743bea1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-941183\" primary control-plane node in \"insufficient-storage-941183\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fd64423-070e-4cc7-b9e7-d5d40c6941eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3beb785-93b8-4f34-adef-e7f35cee9cf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c829ee7-e871-4d3e-b62b-1274c616cedd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-941183 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-941183 --output=json --layout=cluster: exit status 7 (294.82247ms)

-- stdout --
	{"Name":"insufficient-storage-941183","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-941183","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1206 09:45:36.644736  688156 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-941183" does not appear in /home/jenkins/minikube-integration/22047-499330/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-941183 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-941183 --output=json --layout=cluster: exit status 7 (296.70366ms)

-- stdout --
	{"Name":"insufficient-storage-941183","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-941183","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1206 09:45:36.942390  688284 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-941183" does not appear in /home/jenkins/minikube-integration/22047-499330/kubeconfig
	E1206 09:45:36.953128  688284 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/insufficient-storage-941183/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-941183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-941183
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-941183: (1.931571108s)
--- PASS: TestInsufficientStorage (12.08s)
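With --output=json, minikube emits one CloudEvents-style JSON object per line, as in the run above. A sketch that decodes such lines and surfaces the RSRC_DOCKER_STORAGE error event (the sample line is abridged from this run's output; real output carries more fields):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// One line abridged from the output above.
		input := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space! (/var is at 100% of capacity)"}}`
		sc := bufio.NewScanner(strings.NewReader(input))
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate non-JSON noise between events
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}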

TestRunningBinaryUpgrade (46.56s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.702321388 start -p running-upgrade-854014 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.702321388 start -p running-upgrade-854014 --memory=3072 --vm-driver=docker  --container-runtime=crio: (19.567776367s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-854014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-854014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.687891252s)
helpers_test.go:175: Cleaning up "running-upgrade-854014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-854014
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-854014: (2.421395131s)
--- PASS: TestRunningBinaryUpgrade (46.56s)

TestKubernetesUpgrade (303.96s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.079724938s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-581224
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-581224: (1.943327539s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-581224 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-581224 status --format={{.Host}}: exit status 7 (92.853356ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.730220289s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-581224 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (80.006171ms)

-- stdout --
	* [kubernetes-upgrade-581224] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-581224
	    minikube start -p kubernetes-upgrade-581224 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5812242 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-581224 --kubernetes-version=v1.35.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-581224 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.073168188s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-581224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-581224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-581224: (2.899202528s)
--- PASS: TestKubernetesUpgrade (303.96s)
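The downgrade attempt above fails by design: minikube compares the requested Kubernetes version against the cluster's current one and refuses to move backwards (exit 106, K8S_DOWNGRADE_UNSUPPORTED). A hedged sketch of such a version guard using golang.org/x/mod/semver (requires `go get golang.org/x/mod`); this is illustrative of the rule being tested, not minikube's actual implementation:

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func main() {
		// Versions from the run above.
		current, requested := "v1.35.0-beta.0", "v1.28.0"
		if semver.Compare(requested, current) < 0 {
			fmt.Printf("refusing to downgrade %s -> %s; delete and recreate instead\n",
				current, requested)
			return
		}
		fmt.Println("version change allowed")
	}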

TestMissingContainerUpgrade (94.1s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.488092747 start -p missing-upgrade-633386 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.488092747 start -p missing-upgrade-633386 --memory=3072 --driver=docker  --container-runtime=crio: (46.414326502s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-633386
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-633386: (2.507949727s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-633386
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-633386 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-633386 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.103120153s)
helpers_test.go:175: Cleaning up "missing-upgrade-633386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-633386
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-633386: (2.397044212s)
--- PASS: TestMissingContainerUpgrade (94.10s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184706 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-184706 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (102.168951ms)

-- stdout --
	* [NoKubernetes-184706] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestPause/serial/Start (80.14s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-137950 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-137950 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.139438314s)
--- PASS: TestPause/serial/Start (80.14s)

TestNoKubernetes/serial/StartWithK8s (33.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184706 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1206 09:45:47.671473  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-184706 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.020376382s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-184706 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.44s)

TestNoKubernetes/serial/StartWithStopK8s (27.96s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184706 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-184706 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.026602154s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-184706 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-184706 status -o json: exit status 2 (354.833237ms)

-- stdout --
	{"Name":"NoKubernetes-184706","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-184706
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-184706: (2.57356582s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.96s)

TestNoKubernetes/serial/Start (9.07s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184706 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-184706 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.066495731s)
--- PASS: TestNoKubernetes/serial/Start (9.07s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22047-499330/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-184706 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-184706 "sudo systemctl is-active --quiet service kubelet": exit status 1 (386.748934ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
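The check above relies on `systemctl is-active` exit codes: 0 means the unit is active, non-zero (3 for inactive, surfaced here as ssh status 3) means it is not. A sketch of the same kubelet check run locally rather than over `minikube ssh`:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; the exit code alone carries the answer.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &ee):
			fmt.Printf("kubelet is not active (exit %d)\n", ee.ExitCode())
		default:
			fmt.Println("could not run systemctl:", err)
		}
	}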

TestNoKubernetes/serial/ProfileList (1.84s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.84s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-184706
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-184706: (1.289040724s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (7s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184706 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-184706 --driver=docker  --container-runtime=crio: (7.004369647s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.00s)

TestPause/serial/SecondStartNoReconfiguration (7.04s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-137950 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-137950 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.028586806s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.04s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-184706 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-184706 "sudo systemctl is-active --quiet service kubelet": exit status 1 (327.611764ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (3.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.72s)

TestStoppedBinaryUpgrade/Upgrade (290.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3175087442 start -p stopped-upgrade-031481 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3175087442 start -p stopped-upgrade-031481 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.757839187s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3175087442 -p stopped-upgrade-031481 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3175087442 -p stopped-upgrade-031481 stop: (1.971601094s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-031481 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-031481 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.496946996s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (290.23s)

TestNetworkPlugins/group/false (3.59s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-983381 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-983381 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (179.394828ms)

-- stdout --
	* [false-983381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1206 09:49:06.168969  739804 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:49:06.169241  739804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:49:06.169251  739804 out.go:374] Setting ErrFile to fd 2...
	I1206 09:49:06.169257  739804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:49:06.169466  739804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-499330/.minikube/bin
	I1206 09:49:06.169951  739804 out.go:368] Setting JSON to false
	I1206 09:49:06.171158  739804 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9090,"bootTime":1765005456,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:49:06.171223  739804 start.go:143] virtualization: kvm guest
	I1206 09:49:06.173609  739804 out.go:179] * [false-983381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:49:06.174690  739804 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:49:06.174750  739804 notify.go:221] Checking for updates...
	I1206 09:49:06.177238  739804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:49:06.178386  739804 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-499330/kubeconfig
	I1206 09:49:06.179443  739804 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-499330/.minikube
	I1206 09:49:06.180551  739804 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:49:06.181653  739804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:49:06.183185  739804 config.go:182] Loaded profile config "cert-expiration-669264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:49:06.183330  739804 config.go:182] Loaded profile config "kubernetes-upgrade-581224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:49:06.183469  739804 config.go:182] Loaded profile config "stopped-upgrade-031481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:49:06.183589  739804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:49:06.210477  739804 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:49:06.210592  739804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:49:06.277140  739804 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-06 09:49:06.266241648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:49:06.277251  739804 docker.go:319] overlay module found
	I1206 09:49:06.278828  739804 out.go:179] * Using the docker driver based on user configuration
	I1206 09:49:06.279793  739804 start.go:309] selected driver: docker
	I1206 09:49:06.279808  739804 start.go:927] validating driver "docker" against <nil>
	I1206 09:49:06.279819  739804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:49:06.281293  739804 out.go:203] 
	W1206 09:49:06.282198  739804 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1206 09:49:06.283120  739804 out.go:203] 

** /stderr **
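The MK_USAGE exit above is the expected outcome here: the "false" network-plugin case starts minikube with CNI disabled, and the crio runtime refuses to start without one. A minimal sketch of an invocation that would clear the usage check (profile name reused purely for illustration; --cni accepts auto, bridge, calico, cilium, flannel, kindnet, or a path to a CNI manifest):

	# any concrete CNI satisfies the "crio requires CNI" validation
	out/minikube-linux-amd64 start -p false-983381 --driver=docker --container-runtime=crio --cni=bridge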
net_test.go:88: 
----------------------- debugLogs start: false-983381 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-983381

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-983381

>>> host: /etc/nsswitch.conf:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /etc/hosts:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /etc/resolv.conf:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-983381

>>> host: crictl pods:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: crictl containers:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> k8s: describe netcat deployment:
error: context "false-983381" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-983381" does not exist

>>> k8s: netcat logs:
error: context "false-983381" does not exist

>>> k8s: describe coredns deployment:
error: context "false-983381" does not exist

>>> k8s: describe coredns pods:
error: context "false-983381" does not exist

>>> k8s: coredns logs:
error: context "false-983381" does not exist

>>> k8s: describe api server pod(s):
error: context "false-983381" does not exist

>>> k8s: api server logs:
error: context "false-983381" does not exist

>>> host: /etc/cni:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: ip a s:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: ip r s:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: iptables-save:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: iptables table nat:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> k8s: describe kube-proxy daemon set:
error: context "false-983381" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-983381" does not exist

>>> k8s: kube-proxy logs:
error: context "false-983381" does not exist

>>> host: kubelet daemon status:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: kubelet daemon config:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> k8s: kubelet logs:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-669264
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-581224
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-031481
contexts:
- context:
    cluster: cert-expiration-669264
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-669264
  name: cert-expiration-669264
- context:
    cluster: kubernetes-upgrade-581224
    user: kubernetes-upgrade-581224
  name: kubernetes-upgrade-581224
- context:
    cluster: stopped-upgrade-031481
    user: stopped-upgrade-031481
  name: stopped-upgrade-031481
current-context: ""
kind: Config
users:
- name: cert-expiration-669264
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/cert-expiration-669264/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/cert-expiration-669264/client.key
- name: kubernetes-upgrade-581224
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.key
- name: stopped-upgrade-031481
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/stopped-upgrade-031481/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/stopped-upgrade-031481/client.key

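The empty current-context in the dump above is why every kubectl-based probe in this section reports "context was not found": the false-983381 start exited before any context was written. A minimal sketch of inspecting and selecting one of the contexts that does exist, assuming a standard kubectl on the PATH:

	kubectl config get-contexts                        # shows only the three profiles dumped above
	kubectl config use-context cert-expiration-669264  # sets current-context to an existing entry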
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-983381

>>> host: docker daemon status:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: docker daemon config:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /etc/docker/daemon.json:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: docker system info:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: cri-docker daemon status:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: cri-docker daemon config:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: cri-dockerd version:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: containerd daemon status:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: containerd daemon config:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /etc/containerd/config.toml:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: containerd config dump:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: crio daemon status:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: crio daemon config:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: /etc/crio:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

>>> host: crio config:
* Profile "false-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983381"

----------------------- debugLogs end: false-983381 [took: 3.245184757s] --------------------------------
helpers_test.go:175: Cleaning up "false-983381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-983381
--- PASS: TestNetworkPlugins/group/false (3.59s)

TestStartStop/group/old-k8s-version/serial/FirstStart (51.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.182853087s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-507108 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [112fc604-7ed8-485f-a853-ab890836965e] Pending
helpers_test.go:352: "busybox" [112fc604-7ed8-485f-a853-ab890836965e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [112fc604-7ed8-485f-a853-ab890836965e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003906572s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-507108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.25s)
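The DeployApp step above is driven by testdata/busybox.yaml plus a label wait. A rough hand-run equivalent, assuming (as the wait string implies) that the manifest labels the pod integration-test=busybox; the manifest contents themselves are not reproduced here:

	kubectl --context old-k8s-version-507108 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-507108 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-507108 exec busybox -- /bin/sh -c "ulimit -n"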

TestStartStop/group/old-k8s-version/serial/Stop (16.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-507108 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-507108 --alsologtostderr -v=3: (16.066846968s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 -n old-k8s-version-507108
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 -n old-k8s-version-507108: exit status 7 (84.100642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-507108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
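Note the pattern in this step: minikube status exits non-zero while the host is stopped (exit status 7 above, matching the Stopped output), and the test explicitly tolerates that before enabling the addon. A sketch of the same tolerant check in shell; the exit-code meaning is inferred from this log rather than from documentation:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 || echo "status exited $? (non-zero is expected while stopped)"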

TestStartStop/group/old-k8s-version/serial/SecondStart (50.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1206 09:50:47.667967  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-507108 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.502383292s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507108 -n old-k8s-version-507108
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.94s)

TestStartStop/group/no-preload/serial/FirstStart (48.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (48.855876536s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.86s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bfcks" [cadf548c-150e-4634-bed4-cec0c3fc5041] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003980643s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bfcks" [cadf548c-150e-4634-bed4-cec0c3fc5041] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003344323s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-507108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-507108 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/FirstStart (47.2s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (47.197113402s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.20s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (39.472901376s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.47s)

TestStartStop/group/no-preload/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-521770 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a011c5ce-2ff8-4279-bed5-cf9ec25a1eb0] Pending
helpers_test.go:352: "busybox" [a011c5ce-2ff8-4279-bed5-cf9ec25a1eb0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a011c5ce-2ff8-4279-bed5-cf9ec25a1eb0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004544522s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-521770 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

TestStartStop/group/no-preload/serial/Stop (16.5s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-521770 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-521770 --alsologtostderr -v=3: (16.499945088s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.50s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-031481
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-031481: (1.271497007s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

TestStartStop/group/newest-cni/serial/FirstStart (22.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 09:52:16.297975  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (22.79011858s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (22.79s)
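This start threads a custom pod network range through to kubeadm with --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16. One way to confirm the setting reached the node object, using standard kubectl (illustrative only, not part of the test):

	# per-node podCIDR should be a slice carved from 10.42.0.0/16
	kubectl --context newest-cni-641599 get nodes -o jsonpath='{.items[0].spec.podCIDR}'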

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521770 -n no-preload-521770
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521770 -n no-preload-521770: exit status 7 (85.969218ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-521770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (51.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-521770 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (51.144674983s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521770 -n no-preload-521770
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.48s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-759696 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [19743c1a-5c97-490a-bed0-702d9c410f3e] Pending
helpers_test.go:352: "busybox" [19743c1a-5c97-490a-bed0-702d9c410f3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [19743c1a-5c97-490a-bed0-702d9c410f3e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00352912s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-759696 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-997968 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [572be28a-1a60-48d5-95e5-a5355b5493ee] Pending
helpers_test.go:352: "busybox" [572be28a-1a60-48d5-95e5-a5355b5493ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [572be28a-1a60-48d5-95e5-a5355b5493ee] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004322425s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-997968 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (12.69s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-641599 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-641599 --alsologtostderr -v=3: (12.685968941s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.69s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-759696 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-759696 --alsologtostderr -v=3: (16.30373404s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.30s)

TestStartStop/group/embed-certs/serial/Stop (16.69s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-997968 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-997968 --alsologtostderr -v=3: (16.686713078s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.69s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-641599 -n newest-cni-641599
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-641599 -n newest-cni-641599: exit status 7 (86.370454ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-641599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (10.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-641599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.589041579s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-641599 -n newest-cni-641599
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696: exit status 7 (88.522559ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-759696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-759696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (49.815374031s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759696 -n default-k8s-diff-port-759696
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.15s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
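As the warning says, a cluster started with --network-plugin=cni but no bundled CNI cannot schedule ordinary pods until a network add-on is installed, so the user-app check is intentionally a no-op. A hedged sketch of the manual step such a cluster would need; the manifest URL below is a placeholder, not something from this run:

	# install some CNI before expecting pods to schedule (URL is illustrative only)
	kubectl --context newest-cni-641599 apply -f https://example.com/your-cni-manifest.yaml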

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-641599 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)
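VerifyKubernetesImages checks the images present in the cluster against the expected set for that Kubernetes version and flags anything extra, which is why the kindnet image is reported as non-minikube. The listing can be reproduced directly with the same command the test runs:

	out/minikube-linux-amd64 -p newest-cni-641599 image list --format=json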

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997968 -n embed-certs-997968
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997968 -n embed-certs-997968: exit status 7 (96.188459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-997968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-997968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (43.658060553s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997968 -n embed-certs-997968
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.00s)

TestNetworkPlugins/group/auto/Start (72.65s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m12.654642296s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.65s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-sd5kj" [cb4adcb3-2d17-41fd-a527-0697285f721d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003779686s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-sd5kj" [cb4adcb3-2d17-41fd-a527-0697285f721d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013576662s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-521770 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-521770 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (39.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1206 09:53:42.663887  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-326325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (39.341151414s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.34s)
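
A minimal sketch for reproducing this start and confirming the CNI scheduled its DaemonSet (the kubectl context name follows minikube's convention of matching the profile; app=kindnet is the same label the ControllerPod test below waits on):

	out/minikube-linux-amd64 start -p kindnet-983381 --memory=3072 --alsologtostderr --wait=true \
	  --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=crio
	# the controller pods should land in kube-system
	kubectl --context kindnet-983381 -n kube-system get pods -l app=kindnet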

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tkv7v" [6b384527-3c93-4f55-839a-bae4f1b854db] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004209333s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tc684" [48554eb1-e975-4229-8ee7-2e6aeb6ed273] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003247153s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tkv7v" [6b384527-3c93-4f55-839a-bae4f1b854db] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003980779s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-759696 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tc684" [48554eb1-e975-4229-8ee7-2e6aeb6ed273] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003233108s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-997968 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-759696 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-997968 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (49.3s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (49.298707678s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1206 09:54:13.227997  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.477591053s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5m5cx" [408d1778-35ab-4207-b869-35eccbbb8dfd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00337495s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-983381 "pgrep -a kubelet"
I1206 09:54:21.380489  502867 config.go:182] Loaded profile config "kindnet-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)
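
The KubeletFlags check is a single SSH round trip and can be replayed verbatim; pgrep -a prints the kubelet PID together with its full command line, i.e. the flags the test inspects:

	out/minikube-linux-amd64 ssh -p kindnet-983381 "pgrep -a kubelet"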

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-983381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f59fk" [e0fb6e9f-ebc4-43f1-b850-7a23d4088f32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f59fk" [e0fb6e9f-ebc4-43f1-b850-7a23d4088f32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003982059s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)
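
The rollout above can be approximated by hand; kubectl wait here is a stand-in for the suite's own polling helper (helpers_test.go), not what the test literally runs:

	kubectl --context kindnet-983381 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kindnet-983381 wait --for=condition=Ready pod -l app=netcat --timeout=120s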

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-983381 "pgrep -a kubelet"
I1206 09:54:23.272487  502867 config.go:182] Loaded profile config "auto-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-983381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pbbkx" [668cddce-e099-4a57-87ea-544f44f3b0bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pbbkx" [668cddce-e099-4a57-87ea-544f44f3b0bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004331557s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-983381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)
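
Taken together, the kindnet DNS, Localhost, and HairPin tests above probe three distinct data paths from the same netcat pod: cluster DNS resolution, loopback to the pod's own listener, and hairpin traffic (the pod dialing itself back through its "netcat" Service). The commands, verbatim from the runs above:

	kubectl --context kindnet-983381 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context kindnet-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context kindnet-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"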

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-983381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (48.65s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.648160278s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.65s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (71.08s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m11.081669759s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.08s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hrrjb" [0b63aa4c-206a-4e2c-ac61-9bb1049471b6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00528038s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-983381 "pgrep -a kubelet"
I1206 09:54:58.199770  502867 config.go:182] Loaded profile config "custom-flannel-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-983381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tg7mq" [a9c79b99-1900-45e0-b9e3-b92af267a65a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tg7mq" [a9c79b99-1900-45e0-b9e3-b92af267a65a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004281771s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-983381 "pgrep -a kubelet"
I1206 09:55:00.702708  502867 config.go:182] Loaded profile config "calico-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-983381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dm2tp" [b9266038-4f3d-44e4-90da-f4dae67fdb11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dm2tp" [b9266038-4f3d-44e4-90da-f4dae67fdb11] Running
E1206 09:55:04.776444  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:04.782887  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:04.794240  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:04.815600  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:04.857411  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:04.939055  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:05.100683  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:05.422512  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:06.064685  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:55:07.346606  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00401732s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-983381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-983381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (59.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-983381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (59.493148631s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-sh9fw" [7a7bfd73-d438-41e1-9745-50235b9e5d41] Running
E1206 09:55:45.754661  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/old-k8s-version-507108/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00384802s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-983381 "pgrep -a kubelet"
I1206 09:55:47.442562  502867 config.go:182] Loaded profile config "flannel-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-983381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hhp74" [b06a34a4-6c7a-435c-b131-d4c9ddca9cab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 09:55:47.667713  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/functional-857859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-hhp74" [b06a34a4-6c7a-435c-b131-d4c9ddca9cab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004403277s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-983381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-983381 "pgrep -a kubelet"
I1206 09:56:04.996020  502867 config.go:182] Loaded profile config "bridge-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (7.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-983381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pxlm5" [a94d49f7-da81-4bfc-a567-560157b5e883] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pxlm5" [a94d49f7-da81-4bfc-a567-560157b5e883] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 7.0041407s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (7.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-983381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-983381 "pgrep -a kubelet"
I1206 09:56:30.995393  502867 config.go:182] Loaded profile config "enable-default-cni-983381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-983381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-25fpv" [fa350deb-4768-452a-af7d-46d6c086876d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-25fpv" [fa350deb-4768-452a-af7d-46d6c086876d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004197075s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-983381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-983381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    

Test skip (34/415)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
383 TestStartStop/group/disable-driver-mounts 0.2
387 TestNetworkPlugins/group/kubenet 3.33
395 TestNetworkPlugins/group/cilium 3.75
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-920129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-920129
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
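
Note the pattern in helpers_test.go:175-178 above: even a group that skips still deletes its profile so later tests start from a clean state. A minimal sketch of that cleanup hook; the helper name is illustrative, while the delete command is taken verbatim from the log.

package helpers_test

import (
	"os/exec"
	"testing"
)

// cleanupProfile registers a deferred profile deletion, mirroring the
// "Cleaning up ... profile" step that runs even for skipped groups.
func cleanupProfile(t *testing.T, profile string) {
	t.Cleanup(func() {
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to clean up %q: %v\n%s", profile, err, out)
		}
	})
}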

                                                
                                    
TestNetworkPlugins/group/kubenet (3.33s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test because the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-983381 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-983381

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-983381

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /etc/hosts:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /etc/resolv.conf:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-983381

>>> host: crictl pods:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: crictl containers:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> k8s: describe netcat deployment:
error: context "kubenet-983381" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-983381" does not exist

>>> k8s: netcat logs:
error: context "kubenet-983381" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-983381" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-983381" does not exist

>>> k8s: coredns logs:
error: context "kubenet-983381" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-983381" does not exist

>>> k8s: api server logs:
error: context "kubenet-983381" does not exist

>>> host: /etc/cni:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: ip a s:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: ip r s:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: iptables-save:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: iptables table nat:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-983381" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-983381" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-983381" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: kubelet daemon config:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> k8s: kubelet logs:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-669264
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-581224
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-031481
contexts:
- context:
    cluster: cert-expiration-669264
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-669264
  name: cert-expiration-669264
- context:
    cluster: kubernetes-upgrade-581224
    user: kubernetes-upgrade-581224
  name: kubernetes-upgrade-581224
- context:
    cluster: stopped-upgrade-031481
    user: stopped-upgrade-031481
  name: stopped-upgrade-031481
current-context: ""
kind: Config
users:
- name: cert-expiration-669264
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/cert-expiration-669264/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/cert-expiration-669264/client.key
- name: kubernetes-upgrade-581224
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.key
- name: stopped-upgrade-031481
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/stopped-upgrade-031481/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/stopped-upgrade-031481/client.key
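
This dump is the root cause of every kubectl failure above: current-context is empty and no kubenet-983381 entry exists in the kubeconfig. A minimal sketch of the same lookup failure, assuming client-go's clientcmd loader and an illustrative kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// Loads a kubeconfig and checks for the context the debug probes asked for.
// With the config dumped above, the lookup fails before any API call is made.
func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config") // path is illustrative
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["kubenet-983381"]; !ok {
		fmt.Println(`context "kubenet-983381" does not exist`)
	}
}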

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-983381

>>> host: docker daemon status:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: docker daemon config:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: docker system info:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: cri-docker daemon status:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: cri-docker daemon config:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: cri-dockerd version:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: containerd daemon status:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: containerd daemon config:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: containerd config dump:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: crio daemon status:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: crio daemon config:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: /etc/crio:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"

>>> host: crio config:
* Profile "kubenet-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983381"
----------------------- debugLogs end: kubenet-983381 [took: 3.167996008s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-983381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-983381
--- SKIP: TestNetworkPlugins/group/kubenet (3.33s)
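
A hypothetical form of the guard at net_test.go:93 follows; the function name and signature are assumptions, not minikube's actual code. The reasoning it encodes: kubenet bypasses CNI entirely, while crio only works with a CNI plugin, so the combination can never start.

package net_test

import "testing"

// skipIfNeedsCNI skips the kubenet network-plugin subtest for container
// runtimes that require a CNI plugin.
func skipIfNeedsCNI(t *testing.T, plugin, containerRuntime string) {
	if plugin == "kubenet" && containerRuntime != "docker" {
		t.Skipf("Skipping the test because the %s container runtime requires CNI", containerRuntime)
	}
}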

                                                
                                    
TestNetworkPlugins/group/cilium (3.75s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1206 09:49:13.227217  502867 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/addons-101630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: cilium-983381 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-983381

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-983381

>>> host: /etc/nsswitch.conf:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /etc/hosts:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /etc/resolv.conf:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-983381

>>> host: crictl pods:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: crictl containers:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> k8s: describe netcat deployment:
error: context "cilium-983381" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-983381" does not exist

>>> k8s: netcat logs:
error: context "cilium-983381" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-983381" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-983381" does not exist

>>> k8s: coredns logs:
error: context "cilium-983381" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-983381" does not exist

>>> k8s: api server logs:
error: context "cilium-983381" does not exist

>>> host: /etc/cni:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: ip a s:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: ip r s:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: iptables-save:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: iptables table nat:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-983381

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-983381

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-983381" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-983381" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-983381

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-983381

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-983381" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-983381" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-983381" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-983381" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-983381" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: kubelet daemon config:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> k8s: kubelet logs:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-669264
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-581224
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-499330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-031481
contexts:
- context:
    cluster: cert-expiration-669264
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:47:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-669264
  name: cert-expiration-669264
- context:
    cluster: kubernetes-upgrade-581224
    user: kubernetes-upgrade-581224
  name: kubernetes-upgrade-581224
- context:
    cluster: stopped-upgrade-031481
    user: stopped-upgrade-031481
  name: stopped-upgrade-031481
current-context: ""
kind: Config
users:
- name: cert-expiration-669264
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/cert-expiration-669264/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/cert-expiration-669264/client.key
- name: kubernetes-upgrade-581224
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/kubernetes-upgrade-581224/client.key
- name: stopped-upgrade-031481
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/stopped-upgrade-031481/client.crt
    client-key: /home/jenkins/minikube-integration/22047-499330/.minikube/profiles/stopped-upgrade-031481/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-983381

>>> host: docker daemon status:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: docker daemon config:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: docker system info:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: cri-docker daemon status:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: cri-docker daemon config:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: cri-dockerd version:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: containerd daemon status:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: containerd daemon config:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: containerd config dump:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: crio daemon status:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: crio daemon config:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: /etc/crio:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"

>>> host: crio config:
* Profile "cilium-983381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983381"
----------------------- debugLogs end: cilium-983381 [took: 3.592390488s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-983381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-983381
--- SKIP: TestNetworkPlugins/group/cilium (3.75s)
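
Both debugLogs sections above follow the same collection pattern: every probe runs even though the profile was never created, and its error output is recorded verbatim instead of failing the test. A rough sketch of that loop; the probe names and commands are illustrative, not minikube's actual list.

package main

import (
	"fmt"
	"os/exec"
)

// debugLogs runs each diagnostic probe and prints whatever comes back,
// tolerating failures so a missing profile still yields a full report.
func debugLogs(profile string) {
	probes := []struct {
		name string
		argv []string
	}{
		{"k8s: nodes", []string{"kubectl", "--context", profile, "get", "nodes"}},
		{"host: crio daemon status", []string{"out/minikube-linux-amd64", "-p", profile, "ssh", "sudo systemctl status crio"}},
	}
	for _, p := range probes {
		out, _ := exec.Command(p.argv[0], p.argv[1:]...).CombinedOutput() // errors are reported, not fatal
		fmt.Printf(">>> %s:\n%s\n", p.name, out)
	}
}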

                                                
                                    